• 0 Posts
  • 190 Comments
Joined 10 months ago
Cake day: April 13th, 2024

  • Hmm, it’s difficult to quantify. On a workday I spend an average of probably 6-8 hours on a computer doing job-related tasks. Not really coding most of the time, since we’re maintaining and building a network, so it’s more configuration, planning, coordination, and documentation work. Some days we’re out to actually deploy hardware, or run around and debug stuff, so it’s hard to estimate the average screen time.

    My free time involves a lot of computer time too, but it is split up into smaller categories, either on the desktop computer or the smartphone: Manga, Games, YouTube, Movies, Anime Series, Lemmy, Pornography, News, Banking and Investments.

    In the end I think my job is the biggest unified chunk of time, but that’s kind of arbitrary; if I started subdividing it into different tasks, maybe gaming would become the biggest chunk.


  • Yeah, this kinda bothers me with computer security in general. So, the above is really poor design, right? But that emerges from the following:

    • Writing secure code is hard. Writing bug-free code in general is hard (we haven’t even solved that one yet), but security bugs are special in that someone down the line may be actively trying to exploit the code.
    • It’s often not immediately visible to anyone how secure code actually is. Not to customers, not to people at the company using the code, and sometimes not even to the code’s author. It’s not even very easy to quantify security – I mean, there are attempts to do things like security certification of products, but…they’re all kind of limited.
    • Cost – and thus limitations on time expended and the knowledge base of whoever you have working on the thing – is always going to be present. That’s very much going to be visible to the company. Insecure code is cheaper to write than secure code.

    There is nothing wrong with your three points in general. But I think in this particular case there are some very visible weak points even before getting into the source code:

    • You should not have connections from the cars to the customer support domain at all. There should be a clear delineation between functions, and a single (redundant if necessary) connection gateway for the cars. This is to keep the attack surface small.

    • Authentication always belongs on the server side; passwords and reset-question answers are no different in that regard. Even writing that code on the client was the wrong place from the start.

    • Resetting a password should involve verifying continued access to the associated email account (a rough sketch of such a flow follows below).

    So it seems to me that here the fundamental design was not done securely, well before we even get to the hard part of avoiding new bugs or finding the ones already written.
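
    To make the last point concrete, here is a minimal sketch of an email-verified, server-side reset flow. It’s only an illustration under assumptions I’m making up – the function names (request_reset, complete_reset), the token lifetime, and the in-memory store are all hypothetical, not taken from any real system:

    ```python
    # Minimal sketch of an email-verified, server-side password reset flow.
    # All names and the in-memory token store are illustrative, not from any
    # real codebase; a real system would persist tokens and rate-limit requests.
    import hashlib
    import secrets
    import time

    TOKEN_TTL = 15 * 60      # reset links expire after 15 minutes
    _pending_resets = {}     # sha256(token) -> (email, expiry timestamp)

    def request_reset(email: str) -> None:
        """Generate a single-use token and mail it to the account's address.

        The token only ever reaches the user through their inbox, so completing
        the reset proves continued access to that email account. Nothing about
        the decision is delegated to the client.
        """
        token = secrets.token_urlsafe(32)
        token_hash = hashlib.sha256(token.encode()).hexdigest()
        _pending_resets[token_hash] = (email, time.time() + TOKEN_TTL)
        send_email(email, f"https://example.invalid/reset?token={token}")

    def complete_reset(token: str, new_password: str) -> bool:
        """Validate the token entirely server-side, then set the new password."""
        token_hash = hashlib.sha256(token.encode()).hexdigest()
        entry = _pending_resets.pop(token_hash, None)  # single use: remove it
        if entry is None:
            return False                               # unknown or already used
        email, expires_at = entry
        if time.time() > expires_at:
            return False                               # expired token
        set_password(email, new_password)              # hypothetical account store
        return True

    def send_email(address: str, body: str) -> None:
        print(f"(would send to {address}): {body}")    # stand-in for a real mailer

    def set_password(email: str, new_password: str) -> None:
        print(f"(would update password for {email})")  # stand-in for the account DB
    ```

    The point is just that both the token check and the expiry check happen on the server; the client never sees the secret it has to match against.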

    This could have something to do with the existing structures. E.g. the CS platform was an external product and someone bolted the password reset onto it later in a bad way. The CS department needed to access details on cars during support calls, and instead of going through the service that usually communicates with the cars, it was simpler to implement a separate direct connection to the cars. (I’m just guessing, of course.)

    Maybe, besides cost, there is also the issue that nobody in the organization has overall responsibility for, or the power to enforce, a sensible design of the interactions between the various systems.



  • That is obviously fake. I don’t even detect any real attempt to make it believable.

    Bold claims, no detail, perfectly aligned with the biggest fears of what Musk could have done. Even the stupid mention of AI agents and the link to an example in the documentation, as if that random stuff were evidence supporting the earlier claims.

    Plus they fucked up the internal consistency, despite how short the text is: in the intro our fake protagonist is a former X employee, yet in the third paragraph from the bottom they say “We’re currently doing the same thing in Germany”.