Oh good to know.
It used to be awful but I’m glad to hear it’s improving.
Maybe snapdrop?
When I was on OpenBSD I did FTP and rsync for everything. Syncthing had dire performance issues for me.
Maybe Seafile but I had a bad time with that.
From memory MTP is pretty flaky and quite slow.
ADB push is pretty good, but at that stage rsync is just as easy.
Put SSH on the phone and you can do it all from the computer too.
Are you able to buy unlocked directly from Google? I typically avoid the carrier when I can.
Yeah I did the smart thing and walked away from research.
Total dead end; industry doesn't care at all.
To be fair, wireguard is pretty painless.
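For anyone curious, a minimal sketch of a server-side `wg0.conf` (keys, addresses, and the port are placeholders, not a working config):

```
[Interface]
PrivateKey = <server-private-key>
Address = 10.0.0.1/24
ListenPort = 51820

[Peer]
# e.g. your phone
PublicKey = <phone-public-key>
AllowedIPs = 10.0.0.2/32
```

The phone's config mirrors this: its own `[Interface]` key and address, plus a `[Peer]` section pointing at the server's public key and endpoint.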
Absolutely that’s what the internet was made for!
But family photos I keep a bit more secure, particularly if they're syncing directly from my phone. I take a lot of explicit photos of my wife, but there's also code I'm writing on my computer, the kids playing, etc.
I don’t think it would have made too much of a difference, because even state-of-the-art models aren’t a database.
Maybe more recent models could store more information in a smaller number of parameters, but it’s probably going to come down to the size of the model.
The only exception is if there is indeed some pattern in modern history that the model is able to learn, but I really doubt that.
What this article really brings to light is that people tend to use these models for things they’re not good at, because they’re marketed as something they’re not.
I think they all would have performed significantly better with a degree of context.
Trying to use a large language model like a database is simply a misapplication of the technology.
The real question: if you gave a human an entire library of history, would they be able to identify relevant paragraphs based on a paragraph that only contains semantic information? Probably not. That’s how we need to think about using these things.
Unfortunately, companies like OpenAI really want this to be the next Google, because there’s so much money to be made by selling it as a product to businesses who don’t care to roll their own more efficient solutions.
Well, that’s simply not true. The LLM is simply trained on patterns. Human history doesn’t really have clear rules like programming languages do, so a model isn’t going to internalise it very well. But the English language does have patterns, so if you used a semantic or hybrid search over a corpus of content and then used an LLM to synthesise well-structured summaries and responses, it would probably be fairly usable.
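A toy sketch of that retrieve-then-synthesise pattern. In a real system the "semantic" half is an embedding model and the lexical half is BM25; here both are crude bag-of-words stand-ins, and the corpus, query, and weighting are made up for illustration:

```python
from collections import Counter
import math

corpus = [
    "The Treaty of Westphalia ended the Thirty Years War in 1648.",
    "Python is a programming language created by Guido van Rossum.",
    "The Industrial Revolution began in Britain in the late 18th century.",
]

def bow(text):
    # Bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def hybrid_search(query, docs, alpha=0.5):
    # Blend a lexical-overlap score with a cosine score and rank the docs.
    q = bow(query)
    scored = []
    for d in docs:
        dv = bow(d)
        keyword = len(set(q) & set(dv)) / max(len(q), 1)  # crude lexical overlap
        semantic = cosine(q, dv)  # embedding cosine similarity in a real system
        scored.append((alpha * keyword + (1 - alpha) * semantic, d))
    return [d for _, d in sorted(scored, reverse=True)]

top = hybrid_search("when did the thirty years war end", corpus)[0]
# `top` would then be handed to an LLM as grounding context, e.g.
# prompt = f"Answer using only this context:\n{top}\n\nQ: ..."
print(top)
```

The LLM only ever synthesises from retrieved passages, rather than being asked to recall history from its weights.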
The big challenge we’re facing with media today is that many authors don’t have any understanding of statistics, programming, or data science/ML.
An LLM is not AI; it’s simply an application of a NN over a large dataset that works really well. So well, in fact, that the runtime penalty is outweighed by its utility.
I would have killed for these a decade ago, and they’re an absolute game changer with a lot of potential to do a lot of good. Unfortunately, the uninitiated among us have elected to treat them like a silver bullet because they think it’s the next dot-com bubble.
Yeah man, why not, update here I’d love to check it out!
Maybe I should upload some content too :)
Content; no clue how to fix it though.
On the software side:
Well yeah, self-instruct is how a lot of these models are trained: bootstrap training data off a larger model and fine-tune a pre-existing model on it.
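A toy sketch of that bootstrapping loop. The "teacher" here is a stub function rather than a real model API, and the seed instructions are invented; in practice the teacher is an LLM call and the dataset goes to a fine-tuning trainer:

```python
def teacher_model(prompt):
    # Stand-in for a call to a large "teacher" model.
    canned = {
        "Explain rsync briefly.": "rsync copies files, transferring only the parts that changed.",
        "What is WireGuard?": "WireGuard is a simple, fast VPN protocol.",
    }
    return canned.get(prompt, "I don't know.")

seed_instructions = ["Explain rsync briefly.", "What is WireGuard?"]

# Bootstrap an (instruction, response) dataset off the teacher.
dataset = [
    {"instruction": inst, "response": teacher_model(inst)}
    for inst in seed_instructions
]

# `dataset` would then be fed to a fine-tuning job for the smaller model,
# e.g. (hypothetical API): trainer.finetune(small_model, dataset)
print(len(dataset), dataset[0]["response"])
```

Real self-instruct pipelines also have the teacher generate *new* instructions from the seeds and filter out low-quality or duplicate pairs before training; this sketch skips both steps.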
It’s similar but different.
On some devices, suspend under Linux can still consume a lot of power. I’ve had some pain with this in the past on Void, but runit boots quickly, so it was a non-issue.
I suppose another perspective is encryption: when the device is powered off, the disk is encrypted, so there might be an extra degree of security there.
When I was doing data analytics and teaching at the same time, I would turn off my machine between classes just in case. But I still wanted it to boot fast, because I’d then have to go and teach.
Oh, that is a fair point. We did have a doner kebab place, which definitely blew Burger King and McDonald’s out of the water.
But variety is nice and in the absence of that there was nothing other than KFC and Burger King.
A lot of areas don’t have other restaurants. It’s a sea of fast food that can only be driven to.
The city is different, but before I had the savings to move, I didn’t have much choice.
It’s also just been a tough period to grow up in. It depends on the region, but housing and the cost of living in many places have been significantly harder than in the past.
Every year is harder to get by than the last, despite unprecedented advances in science and tech.
Mobile offline sync is a lost cause. The dev environment, even on Android, is so hostile you’ll never get a good experience.
Joplin comes close, but it’s still extremely unreliable and I’ve had many dropped notes. It also takes hours to sync a large corpus.
I wrote (and use) my own web app built with Axum and Flask. Check out DokuWiki as well.