Arguing with an LLM

He would not. He and I have good communication and he knows what I think about LLMs. This post isn’t meant as a dig on him either, and I don’t think he would see it that way. I actually thought about sending him the link so he could chime in.

3 Likes

I doubt the crappy AI summary would even pick up on all these important details.

1 Like

You should point your colleague to the terms of use of an LLM he’s putting his trust in. Here’s an example from Copilot’s tou:

Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended. Don’t rely on Copilot for important advice. Use Copilot at your own risk.

https://www.microsoft.com/en-us/microsoft-copilot/for-individuals/termsofuse

I’m pretty sure all of them have similar disclaimers buried somewhere in the small print.

Now, techno-optimists, read that again and let it sink in: the creators of this supposedly world-changing “artificial intelligence” themselves claim it is “for entertainment purposes only.”

Although, in my experience, reasoning with AI bros is like negotiating with junkies. Not saying that your client is either :smiley:

6 Likes

It’s much easier to negotiate with street fent and meth users : I do it for a living these days, working in harm reduction :wink:

4 Likes

I like the cut of that sailor’s jib…

https://kotaku.com/delivery-bot-that-took-human-job-asks-man-for-help-man-says-fk-you-2000685226

That’s hilarious. I like that he just left it stranded. It really is a corporation relying on people treating robots like people to make the corp money. Reminds me of Asimov’s three laws of Robotics. Although the robots don’t have those laws. But there were stories of humans beating up robots for stealing jobs and they couldn’t fight back. Seems similar, though less violent.

1 Like

Apparently Microsoft heard about your post @normalized and they’re scrambling to refute it. LOL

Microsoft says Copilot isn’t just ‘for entertainment purposes’ after its terms of service language goes viral

4 Likes

These threads are NOT helping with the “for entertainment purposes only” allegations. Not that I’m complaining.

1 Like

@normalized = @tomshardware confirmed :face_with_peeking_eye:

2 Likes

I read that as “Shardware” and it had me belly laughing.

3 Likes

ROFL! What has this world come to!? The largest existing company by market cap can’t keep the TOU for its “most valuable” and “cutting edge” product up to date, and somehow overlooked the “legacy language” in there? :rofl:

The disclaimer is still there. I doubt their lawyers will let them remove it from the text. If anything, they’ll just camouflage it in denser legalese. We’ll see…

The Microslop damage control team is off the hook lately. What are they going to do next? Tell people not to use word “slop”? :rofl:

8 Likes

:rofl: Great job with that one! They may very well tell people not to say “slop” — espeeecially “Microslop.” This is great news to me. Goes to show: don’t have a disclaimer if you don’t truly mean it :joy:

3 Likes

I think this works for game dev.

I’m side-eyeing this approach for code going into long-term production that will receive years of maintenance by multiple people.

I mean, I agree, but keep in mind I’m fixing it so it hopefully doesn’t become a maintenance nightmare.

I’m trying to look at it more like he’s telling the LLM what he wants and making something that’s a prototype he can show me. So instead of me having to ask a bunch of clarifying questions, I can see what’s in his head.

1 Like

Imagine writing this on your employment contract as a software developer at a tech company.

My code contributions are for entertainment purposes only. I can make mistakes, and my code may not work as intended. Don’t rely on me for important advice. Use my work at your own risk.

10 Likes

I bet some dumb manager will still hire them regardless.

Meanwhile, I can’t get a job at Walmart.

3 Likes

LOL. That really puts things into perspective.

3 Likes

At least it means that programmers who actually know what they’re doing will become more sought after, assuming people don’t put up with unmilled AI oats.
Really, you correcting LLM code is a genius idea in my opinion.

2 Likes

Kinda reads like the EULA to mission critical software that still offers no warranty of any kind, smirk

3 Likes

There are two problems with that. First, someone likened using LLMs to create stuff to pottery. You can buy pottery at a farmer’s market and get handmade, one-of-a-kind stuff that’s more expensive. Or you can go to Pottery Barn or Target and get mass-produced vases, etc., for a cheaper price.

LLMs are likely to become “good enough” for a lot of people.

LLM code quickly becomes unreadable. It’s frustrating enough when you can at least ask a former programmer, “WTF were you thinking?” or figure out how they made the mess and what they were thinking. LLMs don’t think. They just squirt out code like a Play-Doh playset. Also, fixing other people’s code is the WORST part of the job. So, as long as it’s lucrative I will probably do it, but it is literally soul-destroying. And it’s already causing burnout across the industry.

It is basically quality control on an assembly line, making sure that all the cans have the label applied properly - mind numbing. Also, as a human you’re trying to keep up with an LLM that outputs code at probably a million times the rate you can produce it in the same time. Which means you, the human, are perceived as the bottleneck and constantly pushed to “go faster.”

My favorite EULA of all time was the one where you promised them your immortal soul: https://asiaiplaw.com/article/gamestation-collects-7500-souls-with-new-eula

2 Likes