Not sure if that’s a quirk of your particular laptop but I’ve been using a thunderbolt to dual displayport adapter for years and it works great out of one port to drive a pair of 240hz 1440p displays.
LLMs are conversation engines (hopefully that’s not controversial).
Imagine if Google was a conversation engine instead of a search engine. You could enter your query and it would essentially describe, in conversation to you, the first search result. It would basically be like searching Google and using the “I’m feeling lucky” button all the time.
Google, even in its best days, would be a horrible search engine by the “I’m feeling lucky” standard, if “accurate” means “the system understood me and returned real information useful to me”. Instead, Google returns (returned?) millions or billions of results in response to your query, and we’ve become accustomed to finding what we want within the first 10 results or, failing that, tweaking the search.
I don’t know if LLMs are really less accurate than a search engine by that standard. They “know” many things, but a lot of it needs to be verified. It might not be right on the first or second pass. It might require tweaking your parameters to get better output. The model has billions of parameters but regresses toward some common mean.
If an LLM returned results like a search engine instead of a conversation engine, it might return billions of results, most of them nonsense (though usually easily human-detectable), and you’d probably still find what you want within the first 10, or you’d tweak your parameters.
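As a toy sketch of that framing (entirely hypothetical — real LLM APIs return one completion per sampling call, and both the generator and the relevance score here are made up), you could sample many candidate outputs and rank them, search-engine style, returning a “first page” of 10:

```python
import random

def toy_generate(prompt, rng):
    """Stand-in for one sampled LLM completion: the prompt plus random filler."""
    fillers = ["because", "therefore", "banana", "perhaps", "qwxz"]
    return f"{prompt} -> " + " ".join(rng.choice(fillers) for _ in range(4))

def toy_score(text):
    """Made-up relevance heuristic: penalize obviously nonsense tokens."""
    nonsense = {"banana", "qwxz"}
    words = text.split()
    return sum(1 for w in words if w not in nonsense) / len(words)

def search_style_llm(prompt, n_samples=1000, top_k=10, seed=0):
    """Sample many candidates, rank them, and return the 'first page'."""
    rng = random.Random(seed)
    candidates = [toy_generate(prompt, rng) for _ in range(n_samples)]
    candidates.sort(key=toy_score, reverse=True)
    return candidates[:top_k]

results = search_style_llm("why is the sky blue")
print(len(results))  # 10 "search results", best-scoring first
```

The point isn’t the scoring function (which is fake); it’s that with enough samples, the junk sorts to the back and something usable tends to land in the top 10 — which is roughly how we already treat search results.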
(Accordingly I don’t really see LLMs saving all that much practical time either since they can process data differently and parse requests differently but the need to verify their output means that this method still results in a lot of back and forth that we would have had before. It’s just different.)
(BTW this is exactly how Stable Diffusion and Midjourney work if you think of them as searching the latent space of the model and the prompt as the search query.)
edit: oh look, a troll appeared and immediately disappeared. nothing of value was lost.
good to know that alito wasn’t part of a crazy partisan movement or anything
Are you asking me why paint is the way it is? I don’t know, take it up with nature, but stop spreading misinformation.
Red/yellow/blue are the primary colors for paints (as distinct from dyes/inks, where it’s CMY(K), and from light, where it’s RGB).
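The additive-vs-subtractive distinction can be sketched with toy numbers, treating each color as an idealized RGB triple (this models the clean CMY ink case; real red/yellow/blue paints mix far less tidily):

```python
def add_light(a, b):
    """Additive mixing (light sources): channel-wise sum, clipped to 1."""
    return tuple(min(x + y, 1) for x, y in zip(a, b))

def mix_pigment(a, b):
    """Idealized subtractive mixing (inks/pigments): each layer only
    passes the light it reflects, so reflectances multiply."""
    return tuple(x * y for x, y in zip(a, b))

RED, GREEN = (1, 0, 0), (0, 1, 0)
CYAN, YELLOW = (0, 1, 1), (1, 1, 0)

print(add_light(RED, GREEN))      # (1, 1, 0): red + green light = yellow
print(mix_pigment(CYAN, YELLOW))  # (0, 1, 0): cyan + yellow ink = green
```

Same math, opposite operations: adding light builds toward white, while layering pigment subtracts toward black, which is why the two domains end up with different primaries.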
Exceptionally flexible working hours (so I don’t have to go in or leave during rush hour) along with an exceptional increase in pay and a commitment that time in office would actually be used for productive time (any meetings can be done remotely and often are done with remote members anyway).
Actually, as I think about it, the only thing that would lure me back to the office would be an open door policy related to being there, where I’d see myself there maybe 5% of the time. shrug
Can you take this “doubling down on some inane point” shit back to the R site though?
Solid advice, thanks. I think I’d best be able to help out on React (maybe Native) or web apps, or the iOS space since I don’t have a daily-driver Android device (though if I did, Jerboa looks fun), but I can help with UX on any platform.
I’m pretty familiar with Material. Wondering how (specifically) I can help. I’ve used and followed some open source projects but I’ve never contributed.
Same. We should form a collective lol. (I have some dev experience also)
and get on a list for using the national football league’s photographic assets without express written permission? facebanned