

Don’t touch that, it’s a load-bearing 100Mbit switch.


That is kind of assuming the worst-case scenario though. You wouldn’t assume that QA can read every email you send through their mail servers “just because”.
This article sounds a bit like engagement bait based on the idea that any use of LLMs is inherently a privacy violation. I don’t see how pushing the text through a specific class of software is worse than storing confidential data in the mailbox though.
That is assuming they don’t leak data for training, but the article doesn’t mention that.


The rules only matter if the admins adhere to them and enforce them consistently.


It sounds like you are assuming that the wallet needs to re-validate each session, and I don’t see why that would be needed. Each user account would just need to validate their age once, then the website operator could store that in their database. If a user has validated once, you can be sure they’ll stay old enough.
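To make it concrete, here’s a minimal sketch of what I mean, with made-up table and function names (nothing to do with any real wallet API): the site only asks for an attestation when the account has never been verified, and after that it just stores a flag.

```python
import sqlite3

# Hypothetical schema: one flag per account, set after the first successful check.
conn = sqlite3.connect("users.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS users (
        id INTEGER PRIMARY KEY,
        username TEXT UNIQUE,
        age_verified INTEGER DEFAULT 0  -- 0 = never verified, 1 = verified once
    )
""")

def needs_age_check(user_id: int) -> bool:
    """Only ask the wallet for an attestation if this account was never verified."""
    row = conn.execute(
        "SELECT age_verified FROM users WHERE id = ?", (user_id,)
    ).fetchone()
    return row is None or row[0] == 0

def mark_verified(user_id: int) -> None:
    """Remember the result so later sessions skip re-validation entirely."""
    conn.execute("UPDATE users SET age_verified = 1 WHERE id = ?", (user_id,))
    conn.commit()
```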


They’re probably not going to use it…
… but if they do it’s going to be a hell of a good starting point in motivating people to leave Facebook


I believe something like this is supposed to be a use case of the EU Digital Identity Wallet. A website is supposed to be able to receive an attestation of a user’s age without necessarily getting any other information about the person.
https://en.wikipedia.org/wiki/EU_Digital_Identity_Wallet
Apparently the relevant feature is Electronic attestations of attributes (EAAs). I’m not really familiar with how it will be implemented though, and I am a bit afraid that bureaucratic design is going to fuck this up…
Imo something like this would be orders of magnitude better than the current reliance on video identification. Not only is it much more reliable, it will also not feel nearly as invasive as having to scan your face and hope the provider doesn’t save it somewhere.
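Just to illustrate the idea (a toy sketch, with a shared HMAC key standing in for whatever signature scheme the EUDI wallet actually ends up using): the site only ever receives a signed yes/no claim, nothing that identifies the person.

```python
import hashlib
import hmac
import json

# Hypothetical stand-in for an issuer-signed attestation; the real wallet will
# use proper public-key signatures, but the privacy property is the same:
# the site only learns "over 18: yes/no", not who the user is.
ISSUER_KEY = b"demo-issuer-key"

def sign_claim(claim: dict) -> str:
    payload = json.dumps(claim, sort_keys=True).encode()
    return hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()

def verify_age_attestation(attestation: dict) -> bool:
    """Accept only if the signature matches and the claim says over_18."""
    expected = sign_claim(attestation["claim"])
    return (hmac.compare_digest(expected, attestation["signature"])
            and attestation["claim"].get("over_18") is True)

# The wallet would hand the site something like this, and nothing else:
attestation = {"claim": {"over_18": True}, "signature": sign_claim({"over_18": True})}
print(verify_age_attestation(attestation))  # True
```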
Are there really a lot of AI-generated doorbell camera videos out there? I can’t remember anything being posted, but then again maybe that just proves the point.
Then again, the low resolution does make it much easier to hide typical artefacts and issues, so I don’t think it proves anything.


Honestly, you pretty much don’t. Llama models are insanely expensive to run, and most of the model improvements will come from simply growing the model. It’s not realistic to run LLMs locally and compete with the hosted ones; it pretty much requires economies of scale. Even if you invest in a 5090, you’re going to be behind the purpose-built GPUs with 80GB of VRAM.
Maybe it could work for some use cases, but I’d rather just not use AI.


I’ve been working at home since 2020, and while I agree with the advantages most people post here, I definitely miss talking with people over lunch, or even getting out for after-work beers now and then. (Obviously that depends a lot on whether you like your coworkers or not.)
This is apparently a super controversial opinion, but I wouldn’t mind working somewhere that forces people into the office 2, maybe 3 days a week. Just not every day.


Maybe I misunderstand what you mean, but yes, you kind of can. The problem in this case is that the user sends two requests in the same input, and the LLM isn’t able to deal with conflicting instructions between the system prompt and the input.
The post you replied to kind of seems to imply that the LLM can leak info to other users, but that is not really a thing. As I understand it, when you call the LLM it’s given your input plus a lot of context: a hidden system prompt, perhaps your chat history, and other data that might be relevant for the service. If everything is properly implemented, any information you give it stays in your context, assuming nobody does anything stupid like sharing context data between users.
What you need to watch out for though, especially with free online AI services, is that they may use anything you input to train and improve the model. This is a separate process, but if you give personal information to an AI assistant it might end up in the training dataset, and parts of it could end up in the next version of the model. This shouldn’t be an issue if you have a paid subscription or an enterprise contract, which would likely state that no input data can be used for training.
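A toy sketch of the isolation I mean (made-up names, not any specific provider’s API): every call is assembled from the shared system prompt plus only that user’s own history, so there is simply no path for another user’s data to show up.

```python
# Hypothetical per-user chat store; in a real service this would be a database
# or cache keyed by the account, never shared between users.
SYSTEM_PROMPT = "You are a helpful assistant."
chat_histories: dict[str, list[dict]] = {}

def build_request(user_id: str, user_input: str) -> list[dict]:
    """Assemble the messages sent to the model for a single user's call."""
    history = chat_histories.setdefault(user_id, [])
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages += history  # only this user's previous turns
    messages.append({"role": "user", "content": user_input})
    history.append({"role": "user", "content": user_input})
    return messages

# Alice's input stays in Alice's context; Bob's request never sees it.
build_request("alice", "Here is my address: 123 Example Street")
print(build_request("bob", "What did the previous user tell you?"))
```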
I feel personally attacked.


For now, BMW is defaulting to a more traditional approach. If it requires a data package of some sort, it will probably have a recurring fee—and BMW says its customers are already comfortable subscribing to such add-ons.
Sounds like a fairly reasonable position imo, and like they listened to the outrage about heated seats (which tbh was ridiculous). I get the feeling that everyone who commented on this didn’t actually read the article, lol.
Full disclosure: I own a fairly recent BMW and do like it a lot. Would I have bought it with subscription-based heated seats? Maybe not, but I do appreciate other things, like having a physical button to go into battery-save mode instead of having to dive three touchscreen menus deep… or that it’s one of the most powerful hybrids in electric-only mode (though not anymore, I think)… or it being generally more dialed back when it comes to driver-assist features.
That said, I will admit that it has a physical button that, when pressed, tells me to pay up to enable automatic high-beam control… though it’s not like it was an advertised feature (got it used).

It’s probably what has surprised me the most about all this: how much has happened over free hosted email accounts.
I guess it means that privacy from human eyes (i.e. not automated scanning) is pretty good. Or Google/Yahoo are in on a conspiracy, but I can’t imagine regular operations staff being made aware of it.


I can’t really tell if you’re joking or not, but no, I’m saying that it’s a bug, and at no point is anything sent off your computer.


The first one was an experimental instance of mine, and I will shut it down soon. Initially I planned to migrate the database to new hosting, but I also regretted using a domain that was essentially a reference to Reddit (.red). So I decided to start clean on another domain while it was still only me using it, and only for a limited time.


I like that the article excerpt clearly says that it’s simply about files not being removed when the trash bin is emptied, and that it’s a problem specific to the Canonical snap system… Yet every single other comment in here rants about Microsoft spyware. Not many people read beyond the headline, lol.


As someone who has been exploring the Fediverse for about two weeks, I’d say it’s been an overall good experience. I’ve been trying to use it instead of Reddit as much as possible, but it’s also apparent that there’s a lot less content.
I hope that the Fediverse will continue to grow and that it will solve the content problem.
Otherwise the clients feel like a breath of fresh air and less enshittified than Reddit…
edit: Also, regarding the first comment talking about experiences that are just too jarring: I haven’t encountered anything like that, but maybe I am just a sweet summer child.
Those seem to be the terms for the personal edition of Microsoft 365 though? I’m pretty sure the enterprise edition, which has features like DLP and tagging content as confidential, would have a separate agreement where they are not passing on the data.
That is like the main selling point of paying extra for enterprise AI services over the free, publicly available ones.
Unless this boundary has actually been crossed, in which case, yes, it’s very serious.