• 0 Posts
  • 3 Comments
Joined 1 year ago
Cake day: July 8th, 2023

  • https://cepr.net/data-from-bolivias-election-add-more-evidence-that-oas-fabricated-last-years-fraud-claims/

    The USA, through the OAS, fabricated a bogus statistical report accusing Evo of stealing the election. This was when the Elon rat tweeted that they would “coup whoever we want”. Even at the time, plenty of statisticians were reporting that the analysis was bogus. The awful woman they put in charge ordered at least two massacres against people protesting the coup. And as soon as she let the country hold elections, Evo’s party won with more than 50% of the vote. Keep in mind that unlike the USA, Bolivia has more than two parties, so that was a lead of more than 20 points over the runner-up. It’s very obvious that Bolivians want Evo and his party in power. The OAS and the USA are full of shit as usual.

    The coup at least resulted in a two-term limit being implemented in Bolivia, so Evo can’t stay in power forever — so not all bad. But the USA can’t stop fucking with Latin America.


    And Apple has earned any trust? Jesus Christ, people, less than two months ago they were caught restoring “deleted” photos from iCloud to user devices hahahahaha. Of course fans were excusing them by talking about disk sectors, as if that has anything to do with cloud storage accidentally becoming available again hahahaha.

    But yeah, Apple cult followers will find a way to justify surrendering even more freedom to Apple with this BS for sure. And they will be paying top dollar for the pleasure hahahaha.


    What a load of BS hahahaha. LLMs are not “conversation engines” (wtf is that, lol, more PR bullshit hahahaha). LLMs are just statistical autocomplete machines. Literally, they just predict the next token based on the previous tokens and their training data. Stop trying to make them more than they are.

    You can make them autocomplete a conversation and use them as chatbots, but they are not designed to be conversation engines hahahaha. You literally have to feed the entire conversation — including the LLM’s own previous outputs — back into the LLM to get it to autocomplete a coherent conversation. And it’s only coherent if all you care about is the shape. When you care about the content, they are pathetically wrong all the time. It’s just a hack to create smoke and mirrors, and it only works because humans are great at anthropomorphizing machines, and objects, and …

    Then you go and compare ChatGPT to literally the worst search feature in Google. Like, have you met anyone who has used the “I’m Feeling Lucky” button in Google in the last 10 years? Don’t get me wrong, fuck Google and their abysmal search quality. But ChatGPT is not even close to being comparable to that, which is pathetic.

    And then you handwave the real issue with these stupid models when it comes to search results. As if getting 10 or so equally convincing, equally good-looking, equally bullshit-filled answers from an LLM were equivalent to getting 10 links from a search engine hahahaha. Come on, man — the way I filter search engine results is by the reputation of the linked sites, by looking at the content surrounding the “matched” text that Google/Bing/whatever shows, etc. None of that is available in an LLM’s output. You would just get 10 equally plausible answers; good luck telling them apart.

    I’m stopping here, but Jesus Christ. What a bunch of BS you are saying.
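
  • The autocomplete-loop point in the comment above can be sketched in a few lines. This is a toy illustration under my own assumptions — `next_token` is a deterministic canned stand-in, not any real model API — but the loop structure is exactly the claim being made: the whole transcript, including the model’s own past replies, gets re-serialized into the prompt on every turn, and the “reply” is just tokens appended one at a time.

    ```python
    # Toy stand-in for an LLM's next-token sampler (hypothetical, not a real API).
    # Returns one token of a canned continuation based on how much of the
    # current reply already appears after the last "Assistant:" marker.
    def next_token(prompt: str) -> str:
        canned = ["I", "am", "a", "statistical", "parrot.", "<eos>"]
        tail = prompt.rsplit("Assistant:", 1)[-1].split()
        return canned[len(tail)]

    def chat_turn(history: list[tuple[str, str]], user_msg: str) -> str:
        # The ENTIRE transcript -- including the model's own previous
        # outputs -- is re-serialized into the prompt on every turn.
        prompt = "".join(f"{role}: {text}\n" for role, text in history)
        prompt += f"User: {user_msg}\nAssistant:"
        reply_tokens: list[str] = []
        while True:
            tok = next_token(prompt)
            if tok == "<eos>":
                break
            reply_tokens.append(tok)
            prompt += " " + tok  # autocomplete: feed each output token back in
        reply = " ".join(reply_tokens)
        history.append(("User", user_msg))
        history.append(("Assistant", reply))
        return reply

    history: list[tuple[str, str]] = []
    print(chat_turn(history, "Who are you?"))  # "I am a statistical parrot."
    ```

    There is no conversational state inside the “model” at all — all apparent memory lives in the transcript the caller keeps rebuilding.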