verdare [he/him]

Hopeless yuri addict.

  • 0 Posts
  • 23 Comments
Joined 1 year ago
Cake day: June 11th, 2023


  • Eliminating vehicle deaths by making travel impossible

    And here we see decades of automobile industry propaganda in action. There is only the car, or no mobility whatsoever. You remember how everybody was just trapped inside their houses for centuries until the Ford factories started cranking out Model Ts?

    Cars will never be a sustainable solution for mass transportation. The immense waste of materials, energy, and land use won’t be offset by AVs. I don’t think AVs are a bad idea in and of themselves. But, as the article points out, they’re not going to solve any major problems.

    I had never really considered how induced demand would apply to AVs…




  • I find it rather disingenuous to summarize the previous poster’s comment as a “Roko’s basilisk” scenario. That’s intentionally picking a ridiculous argument to characterize the other side of the debate. I think they were pretty clear about actual threats (some more plausible than others, IMO).

    I also find it interesting that you so confidently state that “AI doesn’t get better,” under the assumption that our current deep learning architectures are the only way to build AI systems.

    I’m going to make a pretty bold statement: AGI is inevitable, assuming human technological advancement isn’t halted altogether. Why can I so confidently state this? Because we already have GI without the A. To say that AGI is impossible is, to me, equivalent to arguing that there is something magical about the human brain that technology could never replicate. But brains aren’t magic; they’re incredibly sophisticated electrochemical machines. It is only a matter of time before we find a way to replicate “general intelligence,” whether it’s through new algorithms, new computing architectures, or even synthetic biology.


  • “The only danger to humans is humans.”

    I’m sorry, but this is a really dumb take that borders on climate change denial logic. A sufficiently large comet is an existential threat to humanity. You seem to have this optimistic view that humanity is invincible against any threat but itself, and I do not think that belief is justified.

    People are right to be very skeptical about OpenAI and “techbros.” But I fear this skepticism has turned into outright denial of the genuine risks posed by AGI.

    I find myself exhausted by this binary partitioning of the discourse surrounding AI. Apparently you either have to be a cult member who worships the coming god of the singularity, or believe that AI is impossible or incapable of posing a serious threat.





  • Yeah, Valve has put a lot of effort into bridging the compatibility gap for Linux. Most of that work could also be ported to macOS, but they just don’t care.

    It’s a shame, because getting 32-bit to 64-bit compatibility working would help Linux as well. I don’t know how much longer distros want to keep supporting 32-bit libraries, and some distros have already dropped them.

    That said, macOS compatibility seems like a non-sequitur for an article calling Steam a “time bomb.” DRM is definitely the bigger issue here.





  • I firmly disagree with this post. People should not just “rely on their instincts,” which have proven time and again to be highly inaccurate and subject to bias. This is starting to look like what those “body language experts” do, and those people have lower accuracy than a coin toss in controlled experiments.

    The only reliable way to tell if someone is lying is through actual evidence. What we know so far certainly paints LMG in a bad light, but I will continue to wait for more information to come out.


  • LLMs do replicate a small subset of human cognition, but not the full scope. This can result in human-like behavior, but it’s important to be aware of the limitations.

    The biggest limitation is the misalignment in goals. LLMs won’t perform a very deep analysis of their input because they don’t need to. Their goal isn’t honest discussion, a pursuit of truth, or even having a coherent set of beliefs about the world. Their only goal is to sound plausible. And, as it turns out, it’s not too hard to just bullshit your way through the Turing test.
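
    To make that concrete: the standard pre-training objective behind these models is plain next-token prediction. Here is a minimal sketch of that loss (PyTorch-style, with assumed tensor shapes; not any particular model’s actual code):

        import torch.nn.functional as F

        def next_token_loss(logits, tokens):
            # logits: (batch, seq_len, vocab_size) model scores at each position
            # tokens: (batch, seq_len) the training text as token ids
            # Shift so position t is scored on how well it predicts token t+1,
            # then take cross-entropy. Nothing in this objective rewards truth
            # or a coherent worldview, only high probability on the next token.
            pred = logits[:, :-1, :].reshape(-1, logits.size(-1))
            target = tokens[:, 1:].reshape(-1)
            return F.cross_entropy(pred, target)

    The only thing being optimized is how plausible the continuation looks against existing text, which is exactly why the output can sound right without being right.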