• 0 Posts
  • 144 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • Moving away from a protocol that allows every single application to log all inputs isn’t “a bit more control over what apps can and can’t access”.

    Every app already has full access to your home directory and can replace every other app simply by fiddling with $PATH. What you get with Wayland is at best a dangerous illusion of security.
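
    As a benign illustration of the $PATH point (the directory and file names here are made up), any code running under your user account could shadow a real command like this:

    # create a fake "sudo" that shadows the real one via $PATH
    mkdir -p ~/.local/fakebin
    printf '#!/bin/sh\necho "not the real sudo"\n' > ~/.local/fakebin/sudo
    chmod +x ~/.local/fakebin/sudo
    echo 'export PATH="$HOME/.local/fakebin:$PATH"' >> ~/.bashrc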

    What’s the point then of a server-client architecture if I end up starting a dedicated server for every application?

    Flexibility. I can choose to sandbox things or not. And given how garbage the modern state of sandboxing still is, I’d rather take that flexibility than be forced to sandbox everything.

    Anyway, to take a step back: Wayland doesn’t actually solve any of this. It just ignores it. Not having a way to record inputs or make screenshots does not improve security, it simply forces the user to find other means to accomplish those tasks, and those means can then be utilized by any malicious app just the same. If you actually want to solve this issue, you have to provide secure means to do all of those tasks.


  • You also can’t prevent processes from manipulating each other’s inputs/outputs.

    That’s one of those pseudo-problems. In theory, yeah, a bit more control over what apps can and can’t access would be nice. In reality, it doesn’t really matter, since any malicious app can do more than enough damage even without having access to the X server. The solution is to not run malicious code, or to use WASM if you want real isolation. Xnest, Xephyr and the X11 protocol proxy have also been around for a while; X11 doesn’t prevent you from doing isolation.
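
    A rough sketch of the Xephyr route, for example (the display number, resolution and client are arbitrary):

    # start a nested X server on display :2 (provided by the xserver-xephyr package)
    Xephyr -br -noreset -screen 1280x720 :2 &

    # run the untrusted client against the nested display only
    DISPLAY=:2 xterm &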

    Trying to patch sandboxing into Linux after the fact not only fails to give you isolation that is actually meaningful, it also restricts user freedom enormously. Screenshots, screen recording, screen sharing, keyboard macros, automation, etc. are all very important things that suddenly become a whole lot more difficult if everything is isolated. You lose a ton of functionality without gaining any. Almost 15 years later, Wayland is still playing catch-up to features that used to “just work” in X11.


  • The thing is, what are the chances that those improvements needed a complete rewrite and couldn’t just be patched into X11? As for the lack of screen tearing, is that even an advantage? In X11, to get rid of it, I can do (depends on the driver, but AMD has had it for ages):

    xrandr --output HDMI-0 --set TearFree on
    

    But more importantly, I can also do TearFree off to get more responsiveness. Especially when it comes to gaming, that is a very important option to have.
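
    A quick sketch of that, assuming the same AMD setup and output name as above:

    # check whether the driver exposes the property at all
    xrandr --props | grep -i tearfree

    # and switch it back off for lower latency
    xrandr --output HDMI-0 --set TearFree off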

    There are also other things like CSD, which I consider a fundamental downgrade from the flexibility that X11 offered.


  • Flatpak and Snap certainly go in the wrong direction. Instead of being an upgrade and replacement for existing package managers, they are a crooked sidegrade that solves some problems while creating multiple new ones that used to be solved by older package managers. Flatpak making the Gnome and KDE runtimes the only dependencies that can exist is also pretty messed up.

    I don’t mind AppImages in this, as they never set out to be a new package manager format, but are instead just a way to bundle executables and dependencies into a single file for easier redistribution. You certainly don’t want to use that for all your packages, but as a quick-and-dirty workaround to get some semblance of cross-distribution packaging, with close to zero impact on the user, it’s quite good. It’s also one of the few formats that gives the user full control over up- and downgrades, as it’s all just simple files you can run and archive as you wish; it’s not a service that forces you to always use the latest thing.
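
    A minimal sketch of that workflow (the URL and file name below are placeholders, not a real release):

    # download, make executable, run: that is the whole "install"
    wget https://example.com/SomeApp-1.2.3-x86_64.AppImage
    chmod +x SomeApp-1.2.3-x86_64.AppImage
    ./SomeApp-1.2.3-x86_64.AppImage

    # keeping the old file around is all it takes to downgrade later
    mkdir -p ~/appimages/archive
    mv SomeApp-1.2.3-x86_64.AppImage ~/appimages/archive/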

    So yeah, Linux packaging is still a mess and it will probably take another decade or two before the dust has settled. Though I can’t shake the feeling that we reached peak Linux quite some years ago and it’s all downhill from here. Free Software principles aren’t exactly a high priority for any company doing development in this space, and Free Software principles by themselves aren’t even enough in a modern SaaS world to begin with.

    Somebody needs to write the book on what it means to be Free Software in the modern world, especially when it comes to online-services, distribution and reproducibility, aspects that have been largely ignored so far.


  • Wayland is a classic case of underspecification. They set out to replace X11, but their replacement only covered maybe 50% of what people were actually doing with X11; everything else was left as an exercise for the reader. That’s how you get the sluggish progress of the whole thing, as people will either ignore Wayland because it doesn’t work for their use case, try ugly workarounds that will break in the long run, or implement the thing properly, which in turn might lead to multiple incompatible implementations of the same thing.

    This also creates a weird value proposition for Wayland, as it’s basically like X11, just worse in every way. Even 14 years later it is still struggling to actually replace the thing it set out to replace, let alone improve on it in any significant way.


  • AppImages are kind of harmless in this, as they are just bundled-up binaries and dependencies. They don’t force you into a store, an update system or even installing the app; they are just files that sit on your drive. They can be very useful if you want to quickly switch between old and new versions of an app.

    They wouldn’t work as a replacement for a traditional Linux package manager, but as portable Linux binaries, I quite like them.


  • Snap is just Canonical trying to build an AppStore that they control so they get a cut of every software sale on Linux. It’s straight up evil. They neither support third-party repositories nor is their AppStore server Open Source. It’s built such that they retain all the control and employ only the minimum amount of Open Source to get away with it.

    Flatpak is more tricky; I am not sure there is any company behind it actually controlling it directly. But it is very much built for KDE and Gnome apps. As a general Linux package manager it’s completely useless, as it has no dependency management; the only things you can depend on are the KDE and Gnome runtimes, with no separation of individual libraries and such. Support for more than one binary per package didn’t exist last time I checked, support for the command line in general is terrible, and the whole thing is geared towards an Android’ish experience with simple monolithic apps you can click on. The fact that it runs on multiple distributions is great, but everything else about it is awful.
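
    For illustration, here is how that runtime-only model shows up on the command line (the app ID is just an example):

    # each app pins one monolithic runtime rather than individual libraries
    flatpak info org.gnome.Calculator | grep -i runtime

    # the dependency graph is flat: just apps and runtimes
    flatpak list --app
    flatpak list --runtime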

    There is also Nix, which is by far the best package manager we currently have. It runs on all distros, has completely reproducible builds, git repositories can themselves be treated as actual packages, everything is easily overridable and changeable by the user, and best of all it is all built on regular Unix tooling, i.e. just some symlinks and environment variables; very transparent and easy to understand, no weird container magic that hides what is going on.
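
    A small sketch of what that transparency looks like in practice (the package is arbitrary and the store path will differ):

    # pull a package into a throwaway shell without installing anything system-wide
    nix-shell -p cowsay --run 'command -v cowsay'
    # -> /nix/store/<hash>-cowsay-<version>/bin/cowsay
    # just a path in the store, wired up through $PATH, no containers involved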





  • In the long run it doesn’t really matter if the LLM is or is not trained on all the information out there, as the LLM will be able to search the Web on demand and report back with what it finds. BingChat essentially already does that, and we have a few summarizer bots doing similar jobs. The need to access websites directly and wade through all the clickbait and ads in the hope of finding the bit of information you are actually interested in will be over.

    The LLM will be Adblock, ReaderMode, SQL and a lot more rolled into one, a Swiss army knife for accessing and transforming information. Not sure where that leaves the journalists, but cheap clickbait might lose a lot of value.


  • People actually have an internal reality.

    So do LLMs.

    Can an LLM do even something that simple?

    Ask it about any NSFW topic and it will refuse.

    analogize humans and LLMs when they truly are nothing alike.

    They seem way more similar than different. The parts where they are different follow trivially from the LLM’s architecture (e.g. LLMs are static, tokenizing makes character-based problems difficult, memory is limited to the prompt, no interaction with the external world, no vision, no hearing, …) and most of that can be overcome by extending the model, e.g. multi-modal models with vision and hearing are on their way, DeepMind is working on models that interact with the real world, etc. This is all coming, and coming fast.


  • it couldn’t even begin to make sense of the question “has this poem feline or canine qualities”

    Which is obviously false, as a quick try will show. Poems are just language and LLMs understand that very well. That LLMs don’t have any idea what cats actually look like or how they move, beyond what they can gather from textbooks, is irrelevant here; they aren’t tasked with painting a picture (which the upcoming multi-modal models can do anyway).

    Now there can of course be problems that can be expressed in language but not solved in the realm of language. But I find those to be incredibly rare, rare enough that I have never really seen a good example. ChatGPT captures an enormous amount of knowledge about the world, and humans have written about a lot of stuff. Coming up with questions that would be trivial to answer for any human but impossible for ChatGPT is quite tricky.

    And that’s the tip of the iceberg.

    Have you ever actually seen an iceberg, or just read about them?

    It comes with rules how to learn, it doesn’t come with rules enabling it to learn how to learn

    ChatGPT doesn’t learn. It’s a completely static model that doesn’t change. All the learning happened in a separate step back when it was created; it doesn’t happen when you interact with it. That illusion comes from the text prompt, which includes both your text and its output, getting fed into the model as input. But outside that text prompt, it’s just static.

    “oh I’m not sure about this so let me mull it over”.

    That’s because it fundamentally can’t mull it over. It’s a feed-forward neural network, meaning everything that goes in on one side comes out on the other in a fixed amount of time. It can’t do loops by itself. It has no hidden internal monologue. The only dynamic part is the prompt, which is also why its ability to problem-solve improves quite a bit when you require it to do the steps individually instead of just presenting the answer, as that allows the prompt to be its “internal monologue”.




  • ChatGPT: I can perform certain types of reasoning and exhibit intelligent behavior to some extent, but it’s important to clarify the limitations of my capabilities. […] In summary, while I can perform certain forms of reasoning and exhibit intelligent behavior within the constraints of my training data, I do not possess general intelligence or the ability to think independently and creatively. My responses are based on patterns in the data I was trained on, and I cannot provide novel insights or adapt to new, unanticipated situations.

    That said, this is one area where I wouldn’t trust ChatGPT one bit. It has no introspection (outside of the prompt), due to not having any long-term memory. So everything it says about itself is based on whatever marketing material OpenAI trained it with.

    Either way, any reasonable conversation with the bot will show that it can reason and is intelligent. The fact that it gets stuff wrong sometimes is absolutely irrelevant, since every human does that too.


  • I’ve previously been against trying Arch due to instability issues

    Skip Arch and go straight to NixOS if you are worried about that. It gives you most of the same advantages (a huge, up-to-date package collection) with none of the disadvantages (everything can be downgraded, patched, rolled back, etc. with ease).
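
    For instance, a rough sketch of what a system rollback looks like on NixOS:

    # switch the whole system back to the previous generation
    sudo nixos-rebuild switch --rollback

    # or list the available system generations first
    sudo nix-env --list-generations -p /nix/var/nix/profiles/system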



  • they have no feelings so describing their limitations

    These kinds of articles, which all repeat exactly the same extremely basic points and make lots of fallacious ones, are absolute dogshit at describing the shortcomings of AI. Many of them don’t even bother actually testing the AI themselves, but just repeat what they heard elsewhere. Even with this one I am not sure what exactly they did, as Bing Chat works completely differently for me from what is reported here. It won’t hurt the AI, but it certainly hurts me to read the same old minimum-effort content over and over and over again, and they are the ones accusing AI of generating bullshit.

    The problem is what humans expect from LLMs and how humans use them.

    Yes, humans are stupid. They saw some bad sci-fi and now they expect AI to be capable of literal magic.


  • You can’t use the output of an ML algorithm for things you earn anything from

    Better tell Adobe, as they are loading their Photoshop full of AI stuff.

    If you type “paint me an elephant”, yeah, you might not get copyright on that, but nobody would buy your elephant picture anyway. So that’s hardly an issue. The moment you actually produce something complex with the help of AI, there will be so many steps involved that you’ll get copyright on it no problem.

    And once the AI gets smart enough to produce complex things by itself, without a human hand-holding it along the way, you’ll have bigger problems to worry about anyway, since at that point the AI isn’t just replacing the artist, it’s replacing the whole media production chain. No more need to wait for Hollywood to make a movie: you can just tell your AI what you want to see and it will produce one on demand, customized specifically for you. What we see today is basically the beginnings of the Holodeck, endless on-demand entertainment customized for the user.