• 0 Posts
  • 51 Comments
Joined 9 months ago
Cake day: May 29th, 2024

  • So, keep in mind that single-photon sensors have been around for a while, in the form of avalanche photodiodes and photomultiplier tubes. And avalanche photodiodes are pretty commonly used in LiDAR systems already.

    The ones talked about in the article I linked collect about 50 points per square meter at a horizontal resolution of about 23 cm. Obviously that’s way worse than what’s presented in the phys.org article, but that’s also measuring from 3 km away while covering an area of 700 square km per hour (because these systems are used for wide-area terrain scanning from airplanes). With the way LiDAR works, the system in the phys.org article could be scanning with a very narrow beam to get way more data points per square meter.
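    Just as a back-of-envelope check on what those airborne numbers imply (the figures are from the article I linked, the conversion is mine):

        # Implied return rate for the airborne survey system:
        # ~700 km^2 covered per hour at ~50 points per square meter.
        area_per_hour_m2 = 700 * 1_000_000  # 700 km^2 in m^2
        points_per_m2 = 50

        points_per_second = area_per_hour_m2 * points_per_m2 / 3600
        print(f"~{points_per_second:,.0f} points per second")  # ~9.7 million/s

    Narrow the beam and all of that return rate piles up in a much smaller area, which is the point about data points per square meter.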

    Now, this doesn’t mean that the system is useless crap or whatever. It could be that the superconducting nanowire sensor they’re using lets them measure the arrival time much more precisely than normal LiDAR systems, which would give them much better depth resolution. Or it could be that the sensor has much less noise (false photon detections) than the commonly used avalanche diodes. I didn’t read the actual paper, and honestly I don’t know enough about LiDAR and photon detectors to really be able to compare those stats.

    But I do know enough to say that the range and single-photon capability of this system aren’t really the special parts of it, if it’s special at all.




  • Specifically, they’re completely incapable of unifying information into a self-consistent model.

    To use an analogy: you see a shadow and know it’s being cast by some object with a definite shape, even if you can’t be sure what that shape is. An LLM sees a shadow and its idea of what’s casting it is as fuzzy and mutable as the shadow itself.

    Funnily enough, old-school AI from the 70s, like logic engines, possessed a super-human ability for logical self-consistency. A human can hold contradictory beliefs without realizing it; a logic engine is incapable of self-contradiction once all of the facts in its database have been collated. (This is where the sci-fi idea of machines like HAL 9000 and Data from Star Trek comes from.) However, this perfect reasoning ability left logic engines completely unable to deal with contradictory or ambiguous information, as well as logical paradoxes. They were also severely limited by the fact that practically everything they knew had to be explicitly programmed into them. So if you wanted one to be able to hold a conversation in plain English, you would have to enter all kinds of information that we know implicitly, like the fact that water makes things wet or that most, but not all, people have two legs. A basically impossible task.
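    To make that concrete, here’s a toy sketch (modern Python, not any historical system) of how rigid those fact databases were: every fact is an explicit assertion, consequences are derived mechanically, and a contradiction simply cannot be entered.

        # Toy 70s-style logic engine: explicit facts, forward chaining,
        # and a hard refusal to hold contradictory beliefs.
        facts = set()
        rules = [
            # (premises, conclusion): if all premises are known, conclude.
            ({"is_person(socrates)"}, "is_mortal(socrates)"),
        ]

        def negate(fact):
            return fact[4:] if fact.startswith("not_") else "not_" + fact

        def assert_fact(fact):
            if negate(fact) in facts:
                raise ValueError(f"contradiction: {fact} vs {negate(fact)}")
            facts.add(fact)
            # Forward-chain: fire any rule whose premises are now satisfied.
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    assert_fact(conclusion)

        assert_fact("is_person(socrates)")      # also derives is_mortal(socrates)
        assert_fact("not_is_mortal(socrates)")  # raises: it can't hold both

    Note that everything the engine knows, down to the rule itself, had to be typed in by hand. That’s the scaling problem.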

    With the rise of machine learning and large artificial neural networks, we solved the problem of dealing with implicit, ambiguous, and paradoxical information, but in the process we completely removed the ability to logically reason.


  • That sounds absolutely fine to me.

    Compared to an NVMe SSD, which is what I have my OS and software installed on, every spinning-disk drive is glacially slow. So it really doesn’t make much of a difference if my archive drive is a little bit slower at random R/W than it otherwise would be.

    In fact I wish tape drives weren’t so expensive because I’m pretty sure I’d rather have one of those.

    If you need high R/W performance and huge capacity at the same time (like for editing gigantic high-resolution videos), you probably want some kind of RAID array.






  • Unfortunately these bulbs didn’t have any components that could steer or modulate the electron beam, which is how CRT televisions form an image. Instead they just sprayed a cone of electrons at the phosphor face to form a big blob of light, so the most you could do was make the bulb brighter or darker (or make it flash) by turning the power up and down.

    The closest thing to what you’re imagining would be “pixel LED” headlights. That’s a car headlight technology that continually adjusts the shape of the light output to avoid shining onto cars in the opposite lane, letting you keep high-beam brightness without blinding other drivers. It works with essentially the same technology as a projector: an LED shines onto a MEMS mirror array, and each mirror can dynamically change the direction it points to shape the light reflected off of it. Sensors detect the position of oncoming cars and direct that light-shaping process so the light avoids them.
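    As a toy illustration of that beam-shaping logic (a made-up grid model, not any real headlight’s API): treat the beam as a grid of on/off pixels and blank out the ones covering a detected car.

        import numpy as np

        # True = mirror steering light down the road; False = dark pixel.
        beam = np.ones((8, 16), dtype=bool)

        def blank_region(beam, row, col, radius=1):
            """Turn off the pixels around a detected oncoming car."""
            r0, c0 = max(row - radius, 0), max(col - radius, 0)
            beam[r0:row + radius + 1, c0:col + radius + 1] = False

        blank_region(beam, row=3, col=5)  # camera spotted headlights there
        print(beam.astype(int))           # 0s mark the carved-out dark spot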

    You absolutely could form an image with one of those (projected onto whatever surface it’s shining on), though at present they’re only used in car headlights. I could see them eventually being used in room lighting, though, if the price of MEMS chips comes down enough. They could improve efficiency using anidolic lighting principles, and be marketed as a way to light a room perfectly evenly, or to direct pools of light to certain spots as the owner desired (a bit like how color-changing smart bulbs are marketed today). Such a light source would have to scan the shape of the room, then decide how to aim its light into that space.

    See also Li-Fi if you’re interested in weird stuff piggybacking off of lighting technology. Hackers have actually used something like that (subtly modulating the brightness of a light source) to exfiltrate data; there’s a rough sketch of the idea after the links:

    https://www.securityweek.com/ethernet-leds-can-be-used-exfiltrate-data-air-gapped-systems/

    https://thehackernews.com/2020/02/hacking-air-gapped-computers.html?m=1
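    The core trick in both write-ups is on-off keying: flip the brightness between two nearby levels to send bits. A minimal sketch of the idea, with the actual LED control stubbed out and the timing numbers invented:

        import time

        def set_brightness(level):
            # Stand-in for real hardware control (e.g., a PWM duty cycle).
            print(f"LED at {level}%")

        def transmit(data: bytes, bit_time=0.01, base=90, delta=10):
            for byte in data:
                for i in range(8):
                    bit = (byte >> (7 - i)) & 1
                    # A 1-bit nudges brightness up slightly; a 0-bit doesn't.
                    set_brightness(base + delta if bit else base)
                    time.sleep(bit_time)
            set_brightness(base)

        transmit(b"hi")  # sixteen barely-visible flickers carrying two bytes

    A camera or photodiode watching the LED just samples the brightness at the same bit rate to recover the data.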




  • Why would any of this be about you personally?

    Uh, hello? Do you want to think about why I wrote that? Do you need me to explain the idea that other users of the extension are mostly self-interested, but it’s in their best interest to cooperate and share information if the extension is bad? That the greater the number of people with access to the source code, the less likely it is that some subset of them could cooperate against some other subset? And therefore the more people looking at the source code, the less you have to trust any single person? You know, the same reason you won’t follow a single person into a dark alleyway but are comfortable standing in a crowded street? The first subset being “everyone” and the second being “only you” is an extreme case that’s basically never going to happen, just like the Ohio conspiracy. Do you understand what a negative example is, or are you gonna comment back “wow, I can’t believe you think Ohio doesn’t exist and everyone in the world is out to get you, you must be a paranoid schizophrenic”?

    I honestly can’t take you seriously when this is your view of security

    This is the view of the majority of people who work in netsec. There’s a general sentiment that we should be reviewing code more, relying less on single-developer projects, and getting reproducible builds for everything, but nobody serious thinks that access to source code is a bad thing; it’s usually regarded as a positive.

    So in that sense uBlock is kinda bad because Gorhill does the vast majority of the work, but it would be even worse if it were closed source on top of that.

    "we caught em once so the system works”.

    As opposed to your system where you throw your hands up and say “you’re screwed either way, nothing you do matters, just admit it and give up!”, which has famously done so much good in the world.


  • I trust a random internet stranger that in theory is doing their work in public

    There’s no ‘in theory’ about it.

    I’ve actually had an extension I was using be revealed as spyware (it was hoverzoom; I immediately switched to an alternative afterward).

    I don’t read every line of every piece of software I use, because that would be impossible, but I do actually look at some of it and modify it to suit my needs. It’s because there are many thousands of people like me who do this that the problem in hoverzoom was caught. It’s been ten years, so my memory of the event isn’t the best, but I think it only took a few days to catch, despite the fact that the offending code was left out of the GitHub repo and was only in the compiled extension.

    The state of open source isn’t perfect (not everything has reproducible builds yet) but in general I ‘trust’ that every other programmer in existence isn’t in on a conspiracy to screw me over specifically.


  • That’s what Google was trying to do, yeah, but IMO they weren’t doing a very good job of it. Really old Google search was good if you knew how to structure your queries, but then they tried to make it so you could ask plain-English questions instead of having to think about your keywords, and that ruined it. And you also weren’t able to run it against your own documents.

    LLMs, on the other hand, are so good at statistical correlation that they’re able to pass the Turing test. They know what words mean in context (inasmuch as they “know” anything) instead of just matching keywords and a short list of synonyms. So there’s reason to believe that if you could see which parts of the source text the LLM considers most similar to a query, the results could be pretty good.

    There’s also the possibility of running one locally to search your own notes and documents. But like I said, I’m not sure I want to max out my GPU to do a document search.




  • Being able to summarize and answer questions about a specific corpus of text was a use case I was excited for, even knowing that LLMs can’t really answer general questions or logically reason.

    But if Google search summaries are any indication, they can’t even do that. And I’m not just talking about the screenshots people post; this is my own experience with it.

    Maybe it would work if you could run the LLM in an entirely different way, such that you enter a question and it tells you which part of the source text statistically correlates the most with the words you typed, instead of trying to generate new text. That way, in a worst-case scenario it just points you to an irrelevant part of the source text instead of giving you answers that are subtly wrong or misleading.
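    Something like that already exists in the form of embedding-based search. A rough sketch (the model name is just a common example, and the passages are obviously toy data):

        from sentence_transformers import SentenceTransformer, util

        model = SentenceTransformer("all-MiniLM-L6-v2")
        passages = [
            "The mitochondria is the powerhouse of the cell.",
            "Photosynthesis converts light into chemical energy.",
        ]
        query = "What produces energy in a cell?"

        passage_vecs = model.encode(passages, convert_to_tensor=True)
        query_vec = model.encode(query, convert_to_tensor=True)
        scores = util.cos_sim(query_vec, passage_vecs)[0]

        # Worst case the top hit is irrelevant -- but it's real source
        # text, not a confident-sounding fabrication.
        print(passages[int(scores.argmax())])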

    Even then I’m not sure the huge computational requirements make it worth it over Ctrl-F or a slightly more sophisticated search algorithm.


  • Probably the weirdest kind of lightbulb I’ve heard of is the electron-stimulated luminescence (ESL) bulb.

    They were basically little CRT screens that produced white light instead of a picture. They had about the same efficiency and lifespan as CFL bulbs (which were around at the same time) but better color-rendering capability (higher CRI). They also didn’t use mercury in their construction.

    They never caught on, probably because of how bulky they were, with cost likely a factor as well (though if they had been manufactured at the scale CFLs were, the cost might have come down). Today LEDs are better than both, of course.

    Speaking of cost and LEDs, it’s pretty remarkable just how cheap lighting has gotten. Consider this article, where they talk about the cost of producing light with candles versus incandescent bulbs. Since 2006 we’ve developed LED bulbs that approach or exceed 200 lumens per watt. That’s more than an 11x improvement over the 17 lumens per watt figure given in that article, which adds roughly another 0.9 percentage points to the cost drop (an ~11.8x improvement turns a 99% drop into about a 99.9% one) before we even consider the longer lifetime of the LED bulb.

    I think I calculated at some point that Philips Ultra Efficient bulbs cost less than $1 per year per bulb to operate, if you add the cost of power plus the purchase price of the bulb amortized over its lifetime. At this point, lighting up a room is almost free.
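    For anyone who wants to redo that estimate, this is the shape of the calculation; the wattage, price, lifetime, and usage below are placeholder assumptions, not actual product specs:

        # Annual cost = electricity + purchase price amortized over lifetime.
        watts, price_usd, lifetime_h = 4, 10.00, 50_000
        hours_per_year, usd_per_kwh = 3 * 365, 0.15

        energy = watts / 1000 * hours_per_year * usd_per_kwh   # ~$0.66/yr
        amortized = price_usd * hours_per_year / lifetime_h    # ~$0.22/yr
        print(f"~${energy + amortized:.2f} per bulb per year")  # ~$0.88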