• 0 Posts
  • 21 Comments
Joined 1 year ago
Cake day: July 17th, 2023




  • I’m not saying AI can’t be disruptive. I’m saying we aren’t there. The steady progress you think you’re seeing is bought with increased processing power; the science itself isn’t advancing steadily, it advances in unpredictable jumps. Because the performance gained from added processing power is reaching its peak, we’ll need at least one more of those unpredictable jumps to get to a state that will do what the comment I was responding to was claiming. It could be another 50 years before that happens, or it could be tomorrow.



  • The current AI boom is all based on a single paper from about 7 years ago, and progress since then has been achieved by throwing more and more computing power at it. There have been basically no meaningful architecture improvements in that time, and we are already seeing substantial falloff from throwing more power at the problem. I don’t think it’s a given at all that we are close to the kind of disruption you are predicting.






  • I mostly disagree. It takes away a feeling of agency even if it’s still random at the end of the day. Just trust your players not to metagame. There are exceptions where it would be hard for the player to still get the intended experience without metagaming; but leaving the existence of that experience up to a roll in the first place is probably what’s at fault.




  • Although my anecdote ended with additional data collection, the scary part is the manipulation of action. You might think that, as an example, they see you browsing a pokemon website and therefore show you more pokemon ads, something that could be mutually beneficial. What you should be worried about is something like this: based on your browsing behaviour, they figure out how to manipulate your political action, or figure out your state of mental well-being and manipulate it. There are especially scary cases when this is algorithm-driven instead of being pushed by humans. One could imagine, and I want to preface this by saying I’m not aware of this ever having happened, a machine learning algorithm relating signs of some mental illnesses with an uptick in firearm sales, and then increasing advertising of firearms to those people. You could imagine this driving things like an increased suicide rate.


  • You are thinking way too small about what can be done with that amount of data on you. I’ll give you an example. I once did some programming work for a website. The website got ‘hacked’ (an administrator had their password guessed because they weren’t using good password habits). The site had poor security, and with the admin’s password the ‘hacker’ was able to get a DB dump. Bad stuff. So another guy and I set out to identify who had done it. Via server logs we were pretty sure we had correctly tied the ‘hacker’ to a user of the site. By looking at his activity on the site, and what referral links he had previously followed to get to it, we learned his first name and approximately where he lived. But we knew we needed more info than that, so we looked at his hobbies and figured out he liked pokemon quite a lot. We then created a ‘what pokemon are you’ quiz, asking mostly unimportant questions, but throwing in a couple we needed in order to be able to report him to his local authorities (i.e. his last name and some other info I can’t remember off the top of my head). We then had this quiz posted by an account not associated with the running of the site. The ‘hacker’ filled it out, and we reported him to his local authorities with our evidence that he had broken the law.

    So to reiterate: two idiots with no background in data science, and maybe 16 hours between us, were able to manipulate an arbitrary guy into doing what we wanted using a relatively small amount of data. Now imagine what people dedicating their lives to this stuff can do to you.