• 0 Posts
  • 25 Comments
Joined 1 year ago
Cake day: June 29th, 2023

  • I like this analogy; it’s provocative and it made me think about the issue for longer than I would have otherwise.

    However, after some thought, I don’t think it aligns perfectly since the user can simply choose not to read the article, so there’s an option where they don’t get fucked.

    In the same vein, I think we could make a better analogy to sexting. You meet someone, seem to hit it off, and when the texts and pictures get a little spicy, they hit you with a, “you can pay me now and I will keep all of this in my private spank-bank, otherwise I’m going to share our entire relationship with a group chat I’m in with 1200+ people”

    I think this is a bit stronger because it hits on a few notes where the hook-up analogy falls short: sharing of sensitive information, extortion in exchange for gratification, and the potential for an ongoing relationship.

    Idk, what do you think?



  • Probably because at the end of the day:

    1. Most people don’t have the tools or desire to figure out how to run an LLM locally.
    2. What if I run a local LLM on my PC and I leave my home? Do I now need to learn how to deploy a VPN at home so I always have access? I could do this, but I don’t want to. Oh, you know a model that runs on Android? What if I have an iPhone?
    3. Proton is a for-profit business that surveyed their customers and got feedback that customers wanted a writing assistant. This one seems the most important.

  • They’re arresting tourists from other countries for being pregnant? On the basis that they might… go home and get an abortion? I don’t completely follow.

    You know this thread is about US federal immigration, right?

    I’m not super in tune with everything that happens in the backwards US states, but this doesn’t sound like something that is happening. Yes, I’ve heard that some states have inquired about getting period-tracking data from health apps, and I’ve read the articles about the nefarious ways they could use that data, but I’ve seen nothing about the impact that could have on tourists.




  • I disagree. These children are minors, and their behavior, while abhorrent, betrays a fundamental lack of perspective and empathy.

    I’ve been a teenage boy before and I did some bone-headed things. Maybe not this bad, but still, I agree with the judge in this instance that it would be inappropriate to impose permanent consequences on these kids before their lives even get started because they were stupid, horny, teenage boys.

    Even if we assume that these kids don’t all have well-meaning parents who will impose their own punishments, having a probation officer in high school is not going to help with popularity. On top of that, mandatory classes that will force these boys to evaluate the situation from another perspective seem like a great add-on.

    I know it doesn’t feel like justice, but our goal as a society shouldn’t be to dole out maximum punishment in every instance. The goal is to allow all of us to peacefully coexist and contribute to society - throwing children in a dark hole somewhere to be forgotten isn’t going to help with that.

    Having said all of the above, it feels like a good time to emphasize that we still don’t have any good ideas for solving the core problem here, which is the malicious use of this technology that was dumped on society without any regard for the types of problems that it would create, and entirely without a plan to add guard rails. While I’m far from the only one considering this problem, it should be clear enough by now that dragging our feet on creating regulation isn’t getting us any closer to a solution.

    At a minimum it feels like we need to implement a mandatory class on the responsible use of technology, but the obvious question there is how to keep the material relevant. Maybe it’s something that tech companies could be mandated to provide to all users under 18 - a brief, recurring training (could be a video, idc) and assessment that minors would have to complete quarterly to demonstrate that they understand their responsibilities.



  • Is there any chance you’re at a kbbq or hotpot restaurant? Because then you get to cook the meal yourself, which is arguably chef-like.

    Jokes aside, I see the comparison you’re making and it’s not a bad one. I’d counter by giving the example of a menu - when you get to a restaurant you’re given a menu with text descriptions of the food you can receive from the kitchen. Since this is an analogy and not an exact comparison, let’s say that a meal on the menu is like the starting point of the workflow I described.

    Based on that you have an idea of what the output will be when you order - but let’s say you don’t like mushrooms and you prefer your sauce on the side. When you make your order you provide those modifications - this is like inpainting.

    Certainly you’re not a ‘chef’, but if the dish you design is both bespoke and previously unimaginable, I’d argue that at the very least you contributed to the creative process and participated in creating something new that matches your internal vision.

    Not exactly the same but I don’t think it’s entirely different.


  • Not OP but familiar enough with open source diffusion image generators to be able to chime in.

    I’d argue that being an artist comes down to being able to envision something in your mind’s eye and then reproduce it in the real world using some medium, whether it’s a graphite pencil, oil paint, a block of marble, a Wacom tablet on a PC, or even a negotiation with an AI model. Your definition might be different, but for the sake of conversation this is how I’m thinking about it.

    The workflow for an AI-generated image can have a few steps before the result sufficiently aligns with your vision. Prompting for specific details can be tricky, so usually step 1 is to generate the basic outline of the image you’re after. Depending on your GPU or cloud service, this could take several minutes or hours before you get a basis that you can work with. Once you have the basic image, you can then use inpainting tools to mask specific areas of the image and change specific details, colors, etc. This again can take many, many generations before you land on something that sufficiently matches your vision.

    This is all also after you go through the process of reviewing and selecting one of the hundreds of models that have been trained specifically for different types of output. Want to generate anime-style art? There’s a model for that. Want something great at landscapes? There’s a different one for that. Sure, you can use an all-purpose model for everything, but some models simply don’t have the training to align to your vision, so you either choose to live with ‘close enough’ or you start downloading new options, comparing them with your existing workflow, etc.

    There’s certainly skill associated with the current state of image generation. Perhaps not the same level of practice you need to perfectly represent a transparent veil in graphite, but as with other formats I have a hard time suggesting that when someone represents their vision in the real world that it’s automatically “not art”.



  • It sounds like someone got ahold of a six-year-old copy of Google’s risk register. Based on my reading of the article, it sounds like Google has a robust process for identifying, prioritizing, and resolving risks that are surfaced internally. This is not only necessary for an organization their size, but is also indicative of a risk culture that incentivizes self-reporting of risks.

    In contrast, I’d point to an organization like Boeing, which has recently been shown to have provided incentives to the opposite effect - prioritizing throughput over safety.

    If the author had found a number of issues that were identified 6+ years ago and were still shown to be persistent within the environment, that might be some cause for alarm. But, per the reporting, it seems that when a bug, misconfiguration, or other type of risk is identified internally, Google takes steps to resolve the issue, and does so at a pace commensurate with the level of risk that the issue creates for the business.

    Bottom line, while I have no doubt that the author of this article was well-intentioned, their lack of experience in information security / risk management seems obvious, and ultimately this article poses a number of questions that are shown to have innocuous answers.