• 0 Posts
  • 15 Comments
Joined 1 year ago
Cake day: June 12th, 2023


  • No, I’m not. Saying Solution B is economically more feasible than Solution C is not an argument in favour of Solution A, even if A is cheaper than B or C, because cost is not the only factor.

    Had you actually read my comment, you’d see I’m pro-nuclear, and even more pro-renewables.

    Why don’t you check your own biases and preconceptions for a second and read what I actually wrote instead of what you think I wrote? I could just as easily call you an anti-renewable shill for nuclear pollution, using precisely the same argument you used. It’s not valid.

    Hint: if you ever find yourself arguing with “people like you…” – you’ve lost the argument. Try dropping the right-wing knee-jerk rhetoric and start thinking.


  • Unfortunately, it’s not as simple as that. Theoretically, if everyone were using state-of-the-art fast-breeder reactor designs, we could have up to 300,000 years of fuel. However, those designs are complicated and extremely expensive to build and operate. The finances just don’t make them viable with current technology; they would have to run at a huge financial loss.

    As for extracting uranium from sea-water – this too is possible, but the returns diminish quickly enough to make it financially unviable. As uranium is extracted and removed from the oceans, exponentially more sea-water must be processed to continue extracting uranium at the same rate. This gets infeasible pretty quickly. Estimates are that it would become economically unviable within 30 years.

    Realistically, with current technology we have about 80-100 years of viable nuclear fuel at current consumption rates. If everyone were using nuclear right now, we would fully deplete all viable uranium reserves in about 5 years. A huge amount of research and development will be required to extend this further, and to make new, more efficient reactor designs economically viable. (Or ditch capitalism and do it anyway – good luck with that!)

    Personally, I would rather this investment (or at least a large chunk of it) be spent on renewables, energy storage and distribution before fusion, with fission nuclear as a stop-gap until other cleaner, safer technologies can take over. (Current energy usage would require running about 15,000 reactors globally, and with historical accident rates that’s about one major nuclear disaster every month – rough arithmetic sketched at the end of this comment.) Renewables are simpler, safer, and proven, and the technology is more-or-less already here. Solving the storage and distribution problem is simpler than building safe and economical fast-breeder reactors, or viable fusion power. We have almost all the technology we need to make this work right now; we mostly just lack infrastructure and the will to do it.

    I’m not anti-nuclear, nor am I saying there’s no place for nuclear, and I think there should be more funding for nuclear research, but the boring obvious solution is to invest heavily in renewables, with nuclear as a backup and/or future option. Maybe one day nuclear will progress to the point where it makes sound sense to go all-in on, say, fusion or super-efficient fast-breeders, but at the moment that’s basically science fiction. I don’t think it’s a sound strategy to bank on nuclear right now, although we should definitely continue to develop it. Maybe if we had continued investing in it at the same rate for the last 50 years it might be more viable – but we didn’t.

    Source for estimates: “Is Nuclear Power Globally Scalable?”, Prof. D. Abbott, Proceedings of the IEEE. It’s an older article, but nuclear technology has been pretty much stagnant since it was published.
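
    If it helps, this is roughly the back-of-envelope arithmetic behind those figures. It’s only a sketch – the demand figure, the accident rate per reactor-year, and the current nuclear share are my own ballpark assumptions, not numbers lifted directly from the paper:

    ```python
    # Back-of-envelope sketch of the scaling figures above.
    # All inputs are rough personal assumptions - tweak them as you see fit.

    WORLD_AVG_POWER_TW = 15.0   # assumed average world energy demand, ~15 TW
    REACTOR_GWE = 1.0           # assumed output of one large reactor, ~1 GWe

    # How many reactors would it take to supply all of that demand?
    reactors_needed = WORLD_AVG_POWER_TW * 1000 / REACTOR_GWE
    print(f"Reactors needed: ~{reactors_needed:,.0f}")              # -> ~15,000

    # Accident frequency, assuming (again, my assumption) roughly one major
    # core-damage event per ~1,500 reactor-years of historical operation.
    REACTOR_YEARS_PER_MAJOR_ACCIDENT = 1_500
    accidents_per_year = reactors_needed / REACTOR_YEARS_PER_MAJOR_ACCIDENT
    print(f"Major accidents per year: ~{accidents_per_year:.0f}")   # -> ~10, i.e. roughly one a month

    # Fuel depletion: if conventional reserves last ~100 years at today's
    # consumption, and nuclear currently supplies ~5% of world energy,
    # then going all-nuclear scales consumption up ~20x.
    YEARS_LEFT_AT_CURRENT_RATE = 100
    CURRENT_NUCLEAR_SHARE = 0.05
    years_if_all_nuclear = YEARS_LEFT_AT_CURRENT_RATE * CURRENT_NUCLEAR_SHARE
    print(f"Years of fuel if everything went nuclear: ~{years_if_all_nuclear:.0f}")  # -> ~5
    ```

    All of those inputs are arguable, of course – the point is just that the orders of magnitude are hard to escape.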


  • The modern definition we use today was cemented in 1998, along with the foundation of the Open Source Initiative. The term was used before this, but did not have a single well-defined definition. What we might call Open Source today was mostly known as “free software” prior to 1998, amongst many other terms (sourceware, freely distributable software, etc.).

    Listen again to your 1985 example. You’re not hearing exactly what you think you’re hearing. Note that in your video example the phrase used is not “Open-Source code” as we would use it today, with all its modern connotations (that’s your modern ears attributing modern meaning back into the past), but simply “open source-code” – as in “source code that is open”.

    In 1985 that didn’t necessarily imply anything specific about copyright, licensing, or philosophy. Today it carries with it a more concrete definition and cultural baggage, which it is not necessarily appropriate to apply to past statements.


  • In the latest version of the emergency broadcast specification (WEA 3.0), a smartphone’s GPS capabilities (and other location features) may be used to provide “enhanced geotargeting”, so precise boundaries can be set for local alerts. However, the system is backwards compatible – if you do not have GPS, you will still receive the alert, but whether it is displayed depends on the accuracy of the location features that are enabled. If the phone determines it is within the target boundary, the alert will be displayed. If the phone determines it is not within the boundary, it will be stored and may be displayed later if you enter the boundary.

    If the phone is unable to geolocate itself, the emergency message will be displayed regardless (better to display the alert unnecessarily than not to display it at all). The rough decision logic is sketched at the end of this comment.

    The relevant technical standard is WEA. Only the latest WEA 3.0 standard uses phone-based geolocation. Older versions just broadcast from cell towers within the region, and all phones that are connected to the towers will receive and display the alerts. You can read about it in more detail here.
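
    For what it’s worth, the device-side decision described above looks something like this. It’s only an illustrative sketch – the alert structure, helper names, and polygon test are mine, not anything defined in the WEA 3.0 spec:

    ```python
    # Illustrative sketch of the "enhanced geotargeting" display decision
    # described above. Names and structures are made up for illustration;
    # they are not taken from the actual WEA 3.0 specification.

    from dataclasses import dataclass

    Point = tuple[float, float]  # (latitude, longitude)

    @dataclass
    class Alert:
        message: str
        boundary: list[Point]   # target polygon broadcast with the alert
        pending: bool = False   # held back, re-checked if the phone moves

    def point_in_polygon(p: Point, polygon: list[Point]) -> bool:
        """Standard ray-casting point-in-polygon test."""
        x, y = p
        inside = False
        for i in range(len(polygon)):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % len(polygon)]
            if (y1 > y) != (y2 > y):
                if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                    inside = not inside
        return inside

    def handle_alert(alert: Alert, location: Point | None) -> None:
        if location is None:
            display(alert)            # can't geolocate: show it regardless
        elif point_in_polygon(location, alert.boundary):
            display(alert)            # inside the target boundary: show now
        else:
            alert.pending = True      # outside: hold, show if we enter later

    def display(alert: Alert) -> None:
        print(f"EMERGENCY ALERT: {alert.message}")
    ```

    The key design choice is the first branch: when the phone can’t work out where it is, the system errs on the side of showing the alert.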


  • I understand the concerns about Google owning the OS; that’s my only worry with my Chromebook. If Google starts preventing the use of adblockers, or limiting freedoms in other ways, that might sour my opinion. But the hardware can run other OSes natively, so that would be my get-out-of-jail option if needed.

    I’ve not encountered problems with broken support for dev tools, but I am using a completely different toolchain from yours. My experience with Linux dev and cross-compiling for Android has been pretty seamless so far. My Chromebook also seems to support GPU acceleration through both the Android and Linux VMs, so perhaps that is a device-specific issue?

    I’m certainly not going to claim that Chromebooks are perfect devices for everyone, nor a replacement for a fully-fledged laptop or desktop OS experience. For my particular usage it’s worked out great, but YMMV; my main point is that ChromeOS isn’t just for idiots, as the poster above seemed to think.

    Also, a good percentage of my satisfaction with it is the hardware and form-factor rather than ChromeOS per se. The same device running Linux natively would still tick most of my boxes, although I’d probably miss a couple of Android apps and tablet-mode support.


  • “People who use Chromebooks are also really slow and aren’t technically savvy at all.”

    Nonsense. I think your opinion is clouded by your limited experience with them.

    ChromeOS supports a full Debian Linux virtual machine/container environment. That’s not a feature aimed at non-tech-savvy users. It’s used by software developers (especially web and Android devs), Linux sysadmins, and students of all levels.

    In fact I might even argue the opposite: a more technically-savvy user is more likely to find a use case for them.

    Personally, I’m currently using mine for R&D in memory management and cross-platform compiler technology, with a bit of hobby game development on the side. I’ve even installed and helped debug Lemmy on my Chromebook! It’s a fab ultra-portable, bulletproof dev machine with a battery life that no full laptop can match.

    But then I do apparently have an IQ of zero, so maybe you’re right after all…