• 0 Posts
  • 43 Comments
Joined 1 year ago
Cake day: July 10th, 2023

  • I disagree.

    Both sides are bad, no matter who is currently the aggressor.

    Now, because there is aggression, the aggressor has an obligation to stop it, and we have an obligation to force a stop to the conflict as well. But that doesn’t make the other party any less bad in this. Both sides have killed a lot of innocent people, both have inhumane ulterior motives, and both are supporting further escalation. But ofc if there’s only one party doing the fighting, then that’s the party that acutely needs to be stopped.

    This distinction is very important to me, because you don’t suddenly become the good guy just because you stopped killing civilians. You are just no longer actively committing war crimes, which means we don’t have to intervene because of you anymore, which is at least one less reason. But you are not holy because “this year it was only 300 war crimes”.





  • Well what an interesting question.

    Let’s look at the definitions in Wikipedia:

    Sentience is the ability to experience feelings and sensations.

    Experience refers to conscious events in general […].

    Feelings are subjective self-contained phenomenal experiences.

    Alright, let’s do a thought experiment under the assumptions that:

    • experience refers to the ability to retain information and apply it in some regard
    • phenomenal experiences can be described by some combination of sensory data
    • performance is not relevant; since we only care about theoretical possibility, we just need to assume that with infinite time and infinite resources the simulation of sentience through AI is possible

    AI works by showing it what information goes in and what should come out; it then infers the same mapping for new patterns of information, adjusting based on “how wrong it was” to approximate the correction. Every feeling in our body is either chemical or physical, so for simplicity’s sake it can be measured / simulated through data input.
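    To make that loop concrete, here’s a minimal sketch of the idea, assuming (as a toy illustration, not anything from the comment) that a “sensation” is just two made-up sensor readings and a “feeling” is a label. A single logistic unit is nudged by its signed error — exactly the “adjust to how wrong it was” step:

    ```python
    import math

    # Hypothetical toy data: each "sensation" is a pair of simulated sensor
    # readings (e.g. skin temperature, pulse rate), labelled with a feeling.
    # Label 1 = "distress", 0 = "calm". All names and numbers are invented.
    data = [
        ((0.9, 0.8), 1),  # hot skin, fast pulse -> distress
        ((0.8, 0.9), 1),
        ((0.1, 0.2), 0),  # cool skin, slow pulse -> calm
        ((0.2, 0.1), 0),
    ]

    # A single logistic unit: weights map sensor input to a feeling score.
    w = [0.0, 0.0]
    b = 0.0
    lr = 1.0  # learning rate

    def predict(x):
        z = w[0] * x[0] + w[1] * x[1] + b
        return 1 / (1 + math.exp(-z))  # probability of "distress"

    # Training loop: compare the prediction to the known label ("how wrong
    # it was") and nudge the weights toward the correction.
    for _ in range(1000):
        for x, y in data:
            error = predict(x) - y      # signed error
            w[0] -= lr * error * x[0]   # adjust each weight
            w[1] -= lr * error * x[1]
            b -= lr * error

    # After training, the unit "expresses" a feeling for unseen input.
    print(round(predict((0.85, 0.85))))  # near the distress examples -> 1
    print(round(predict((0.15, 0.15))))  # near the calm examples -> 0
    ```

    Real systems use far richer models and inputs, but the principle — correct toward the target output, then generalize to new patterns — is the same.
    
    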

    Let’s also say for our experiment that the appropriate output is to describe the feeling.

    Now, knowing this, and knowing how well different AIs can already comment on, summarize, or perform any other transformative task on larger texts that requires them to interpret data, I think such a system should be able to “express” what it feels. Let’s also conclude that everything needed to simulate feeling or sensation can be described using different inputs of data points.

    This brings me to the logical second conclusion that, scientifically speaking, there’s nothing about sentience that we wouldn’t already be able to simulate (in light of our assumptions).

    Bonus: my little experiment is only designed to show theoretical possibility, and we’d need some proper statistical calculations to know whether this is already practical in a realistic timeframe and with a limited amount of resources. But there’s nothing saying it can’t be. I guess we have to wait for someone to try it to be sure.