• 1 Post
  • 25 Comments
Joined 9 months ago
Cake day: January 5th, 2024

  • The flatpak documentation has a semi-relevant page on setting up a Flatpak repo using GitLab Pages and GitLab’s CI runners in a pipeline. Obviously, you’d need to substitute a webserver of your choice for GitLab Pages and port the CI logic over to Gitea Actions (ensuring your Gitea instance is set up for it).

    A Flatpak repo itself is little more than a web server plus an associated GPG key for checking the signatures of assembled packages. The docs recommend setting up the CI pipeline to run not so much on every commit to the package repos as on an interval that checks for available updates, though I imagine other scenarios in a fully-controlled environment such as a selfhosted one might offer some flexibility.
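
    As a rough illustration of how small that server-plus-key surface is, the build/sign/publish cycle can look like this (the app ID, GPG key ID, and paths below are placeholders):

    ```bash
    # Build from a manifest and export into a local OSTree repo,
    # signing commits with the repo's GPG key (key ID is a placeholder).
    flatpak-builder --repo=repo --gpg-sign=DEADBEEF build-dir com.example.App.yml

    # Regenerate and re-sign the repo summary after adding builds.
    flatpak build-update-repo --gpg-sign=DEADBEEF repo

    # Publishing is just serving the 'repo' directory over HTTP(S).
    rsync -a repo/ user@webhost:/var/www/flatpak/
    ```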


  • As I’m currently teaching myself maintainable selfhost setups using popular apps (admittedly with Kubernetes rather than something minimal in functionality like Docker Desktop), I can say there is a lot of complexity involved in getting these services both functional and maintainable while also considering the security implications of various setups.

    While I agree that self-hosting is a good concept to advocate, I think the complexity and difficulty involved, not just in doing it but in doing it right, will be a straight cliff of a learning curve for those not already technically inclined in databases, networking, and filesystems/block storage.

    Honestly, taking on the burden of being IT in exchange for a reasonable subscription cost for your efforts is a better way to go, especially if the setup allows for expanding your offerings to other members of a localized community.


  • jrgd@lemm.ee to Linux@lemmy.ml · Best GUI VM software · 24 days ago

    Alongside many others, I agree with using QEMU through GUI frontends like virt-manager or GNOME Boxes, or even server-focused solutions like Cockpit with its VM plugin or Proxmox layered on top of your installation.

    I just want to note a decent point against other solutions like VirtualBox or the VMware products that work on Linux: solutions that don’t rely on QEMU almost certainly need the user to install out-of-tree kernel modules (which in some cases may also be proprietary). QEMU and its frontends need no out-of-tree modules on the majority of distros and work out of the box with all features (provided the host’s BIOS configuration and hardware support them).
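
    If you want to verify that the in-tree stack is ready on a given host, libvirt ships a one-shot checker (shown here for the QEMU/KVM driver):

    ```bash
    # The KVM modules are in-tree and usually auto-loaded.
    lsmod | grep kvm

    # Checks CPU virtualization extensions, /dev/kvm access,
    # cgroup controllers, and IOMMU support in one pass.
    virt-host-validate qemu
    ```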


  • I started dual-booting Linux after an upgrade to an Insider Preview of Windows 10 soft-bricked my Windows 7 install. I later stopped booting into Windows, and when the actual release of Windows 10 attempted to upgrade my Windows 7 system automatically, soft-bricking it a second time, I reclaimed the partitions to extend whatever distro was installed at that point. From 2016 onward, I haven’t used Windows on my systems outside of occasionally booting LTSC in a VM.


  • jrgd@lemm.ee to Linux@lemmy.ml · Java uses double ram. · 27 days ago (edited)

    Running the same memory constraints on a 1.18 vanilla instance, most of the stack memory allocation comes from ramping the render distance from 12 chunks to 32 chunks. The game only uses ~0.7 GiB of non-heap memory at a sane render distance in vanilla versus ~2.0 GiB at 32 chunks. I did forget that the render distance no longer caps out at 16 chunks in vanilla. Far render distances like 32 chunks will naturally balloon the stack memory size.
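
    If anyone wants to see where that non-heap memory actually goes, the JVM’s Native Memory Tracking can break it down per category (it adds a little overhead, so treat it as a diagnostic run; the heap flags and jar name below are just an example):

    ```bash
    # Launch with NMT enabled alongside your usual heap settings.
    java -XX:NativeMemoryTracking=summary -Xms4G -Xmx4G -jar minecraft.jar

    # Query the running process: reports heap, thread stacks,
    # GC structures, code cache, mapped files, etc.
    jcmd <pid> VM.native_memory summary
    ```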


  • jrgd@lemm.ee to Linux@lemmy.ml · Java uses double ram. · 28 days ago (edited)

    For clarification: is this vanilla, a Fabric performance-mod pack, a Fabric content modpack, a Forge modpack, etc. that you are launching? If it’s the modpack you describe needing 8 GiB of heap memory allocated, I wouldn’t be surprised at the Java stack memory taking ~2.7 GiB. If it’s plain vanilla, that memory usage does seem excessive.


  • jrgd@lemm.ee to Linux@lemmy.ml · Java uses double ram. · 28 days ago

    Depending on the version, and whether it’s modded with content mods, you can easily expect Minecraft to use significantly more memory than what you give its heap. Java processes have a statically or dynamically (within bounds) allocated heap carved out of system memory, plus the memory used by the process stack. Additionally, Minecraft may appear to use more memory in some process monitors because external shared libraries used by the application get counted against it.

    My recommendation: don’t allocate more memory to the game than you need to run it without noticeable stutters from garbage collection. If you are running modded Minecraft, one or more mods might be causing stack-related memory leaks (or might just be large and complex enough to genuinely require large amounts of memory). We might be able to get a better picture if you share your launch arguments, game version, total system memory, the memory used by the game in the process monitor you are using (and a modlist if applicable).
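
    For reference, a typical launch line only pins the heap; everything beyond -Xmx is the JVM’s own overhead. The sizes and GC choice below are just an example:

    ```bash
    # 4 GiB heap with min = max to avoid resize stutter; G1 is the
    # common garbage collector choice for modern Minecraft versions.
    java -Xms4G -Xmx4G -XX:+UseG1GC -jar minecraft.jar
    ```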

    In general, it’s also a good idea to set up and enable ZRAM and to disable disk-backed swap if it’s in use.
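
    On systemd distros, zram-generator makes that a small config drop-in; the sizing below is just one common choice:

    ```bash
    # /etc/systemd/zram-generator.conf (INI syntax, shown inline):
    #   [zram0]
    #   zram-size = ram / 2
    #   compression-algorithm = zstd

    # After writing that file, activate the device and retire disk swap.
    sudo systemctl daemon-reload
    sudo systemctl start systemd-zram-setup@zram0.service
    sudo swapoff -a   # and remove stale swap entries from /etc/fstab
    ```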


  • The VRR problems are specifically related to either monitors not supporting FreeSync over HDMI, or to users running a monitor that expects HDMI VRR to work per the HDMI 2.1 spec (>4K@60Hz or equivalent bandwidth negotiation requirements). I would concur that “a small subset of users” is accurate for the use-cases where this becomes a problem.
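
    On amdgpu/i915 you can check what the kernel negotiated before blaming the compositor; connector names vary per machine, so the path below is an example:

    ```bash
    # 1 means the display advertised VRR support on that connector.
    cat /sys/class/drm/card0-HDMI-A-1/vrr_capable

    # Under X11, the same property is visible per output.
    xrandr --props | grep -i vrr
    ```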


  • For many with unstable ISP connections, HTTP downloads can get corrupted. Torrents are superior in this regard, as the file gets split into blocks that are each checksummed for integrity after completion. This helps ensure that the large ISO is actually complete and won’t just be garbage on an attempted install. Even if you checksum the ISO from an HTTP download, you have to pull the entire thing again if it is damaged, whereas a torrent client just re-pulls the damaged blocks automatically.
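
    The manual equivalent on the HTTP side illustrates the difference: one mismatch means one full re-download (filenames below are placeholders):

    ```bash
    # Verify the whole ISO against the distro's published checksum list.
    sha256sum -c distro-x86_64.iso.CHECKSUM

    # On failure, the only fix is fetching the entire image again;
    # a torrent client instead re-verifies and re-fetches per piece.
    ```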


  • Tiny11 comes in two variants:

    Tiny11 Core is not suitable for use on physical hardware as it outright disables updates. It’s best used for short-term VM instances.

    Tiny11 also has problems with updates. The advantages gained through Tiny11 erode as Windows updates are applied. The installer is more tolerable than Windows 11’s by not forcing an online account (but you still need to touch telemetry settings). Components like Edge and OneDrive will inevitably rebuild themselves with cumulative updates. If that coerces you into not updating your system, don’t subject yourself to using Tiny11. Additionally, Tiny11 fails to apply some cumulative updates out of the box, which could be a further security risk.

    I recently tested the main Tiny11 in a VM after a different user recommended it in a now-deleted thread. Knowing the history of Tiny10 onward, I was skeptical that Tiny11 would actually be able to update properly, and my findings backed up that initial skepticism.


  • The worst gotchas and limitations I have seen while building my own self-host stack with IPv6 in mind have come from support in individual bespoke projects more so than from system infrastructure. As soon as you get into containerized environments, things can get difficult. Podman has been a pain point with networking and IPv6, though newer versions have become more manageable. Most of the problems I have seen come from various OCI containers and their subpar implementations of IPv6 support.

    You’d think that with how long IPv6 has been around we’d see better adoption from container maintainers, but I suppose IPv6 existing in a world originally built on IPv4 faces a similar adoption problem to Linux versus Windows on the workstation. Ultimately, if you self-roll everything in your network stack down to the servers, IPv6 is easy to integrate. The more of the setup one offloads to preconfigured and/or specialized tools, the more I have seen IPv6 support fall by the wayside, at least in terms of software.
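
    Podman is a good example of “supported, but only if you ask”: networks are IPv4-only unless created with IPv6 enabled. A sketch, with a placeholder ULA subnet:

    ```bash
    # Create a dual-stack network; without --ipv6 (or an IPv6 --subnet),
    # containers on the network get no IPv6 connectivity at all.
    podman network create --ipv6 --subnet fd00:abcd:1234::/64 dualstack

    # Confirm a container actually receives an IPv6 address.
    podman run --rm --network dualstack docker.io/library/alpine ip -6 addr
    ```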

    Not to mention hardware support and the networking capabilities provided by an ISP. My current residential ISP only provides IPv4 behind CGNAT to consumers. To even test my services over IPv6, I have to run a VPN connection tunneling IPv6 traffic to an endpoint beyond my ISP.


  • Based on how the script /usr/lib/kernel/install.d/99-grub-mkconfig.install (a script that runs on kernel installation) behaves, unless you are running under the Xen hypervisor or on an architecture that doesn’t support it, Fedora by default expects GRUB_ENABLE_BLSCFG to be set to true. This script is provided by the grub2-common package, so it’s unlikely it can be removed without removing the GRUB bootloader’s management system entirely.

    More than likely, most customizations will work just fine with GRUB_ENABLE_BLSCFG set to true, as long as you properly run grub2-mkconfig (Fedora’s name for grub-mkconfig; update-grub is the Debian equivalent) after you make those changes so that they get applied to the bootloader portion of GRUB itself.
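
    The usual round-trip on Fedora, for reference (any edits happen in /etc/default/grub first):

    ```bash
    # Regenerate the static grub.cfg; with BLS enabled, per-kernel
    # entries stay as drop-ins under /boot/loader/entries/.
    sudo grub2-mkconfig -o /boot/grub2/grub.cfg
    ```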

    If for some reason you absolutely need to disable BLS in order to get the customization you want, the proper way to enforce grub-mkconfig on new kernels would be to write a script in the /usr/lib/kernel/install.d/ directory, named something like 98-grub-manual-mkconfig.install, that forcibly runs the proper mkconfig command after kernel installation and initramfs generation. A minimal sketch of such a drop-in follows.
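
    This assumes the standard kernel-install plugin convention (the first argument is the action; dracut’s own plugin sorts earlier at 50-dracut.install, so initramfs generation has already happened):

    ```bash
    #!/usr/bin/env bash
    # Hypothetical /usr/lib/kernel/install.d/98-grub-manual-mkconfig.install
    # kernel-install invokes plugins as: <plugin> add|remove <version> ...

    # Only act on kernel additions; removals and other actions pass through.
    [[ "$1" == "add" ]] || exit 0

    exec grub2-mkconfig -o /boot/grub2/grub.cfg
    ```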


  • Checking inside /usr/lib/kernel/install.d/, you can see the mechanisms in place for installing new kernel entries. Not knowing what you did to your config (did you back it up before making changes?), you should check whether the entries are being populated properly in /boot/loader/entries/. If they are, you have likely toyed with the BLS config in some way that broke GRUB’s ability to load dynamic entries without running mkconfig.
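
    A quick way to sanity-check that on the affected machine:

    ```bash
    # Each installed kernel should have a matching BLS entry file here.
    ls /boot/loader/entries/

    # Compare against the kernels actually installed.
    rpm -q kernel-core

    # Confirm GRUB is still configured to read BLS entries.
    grep GRUB_ENABLE_BLSCFG /etc/default/grub
    ```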

    If that is indeed the case, I wouldn’t know exactly what you touched to break it, but this discussion forum might give some insight.

    If this isn’t the problem, it might be helpful to post your GRUB config (minus any sensitive details) to help determine what is going wrong.