Has anyone else noticed that topics related to AI get a lot of engagement on this forum?

I’m not saying this is strictly a bad thing, but whenever anybody brings up AI in any way on this forum, engagement (and conflict) SKYROCKET!


The most recent example is a discussion about the AI bubble. It already has 60 replies/500 views and counting:

Then I’d say the most infamous AI-related post was the one that caused copy/paste AI responses to be banned (which I very much agree with):

This is the only thread that I’ve ever seen manually closed by a mod. (With my funny suggestion, of course):


Here are even more examples:


Again, no shade here, but I’m just pointing out the pattern.

3 Likes

People have opinions about it, I guess? It’s talked about a lot in relation to programming, so maybe that’s also a factor.

I just use it when I need help learning a topic or hunting down logic bugs, but beyond that I found it pretty useless when I tried to vibe code with it a few months ago.

1 Like

I don’t see this as weird, given the major push into “AI” from (especially) the US government and how seriously it can affect this industry. It already is affecting it: AI slop is being pushed out (with platforms banning or labeling it), and people are losing their jobs or failing to get jobs because of it.

3 Likes

Heh, it’s a hot topic in the tech world, so it’s not surprising that people have opinions to share about it.

Regarding my latest topic (the bubble one): I had noticed this pattern too. With the recent news about AI, I was starting to have a lot of questions and opinions I needed to share somewhere, so I decided to make a topic here and see what responses I would get (and, on a really, really secondary note, I was legitimately curious how much traction a topic like this would get on the forum lol).

And the results were AMAZING. Many people wrote elaborate responses that taught me things I didn’t know, and in some cases even taught me something about the incredible users who wrote them.

I even saw people coming back to the topic to add more of what they had to say because it had stuck in their minds (and I guess in yours too, as my topic seems to be the main one that led to this thread). That made me realise that my questions and opinions, and everyone else’s, weren’t just words thrown into the air; they really left something with others, and that’s beautiful.

And now the topic is sitting at 60+ responses and more than 500 views. So much has been said that I didn’t manage to respond to it all myself, and I think that’s okay, because this isn’t me asking a question and receiving an answer anymore; it’s a full-on conversation led by many people about the subject, and it demonstrates how incredible our community really is.

I want to thank all the fantastic people who have responded to my topic so far, and everyone who might respond before it fully dies out.

2 Likes

Yes.

It is a hot topic.

This is a lovely community.

It is very relevant to what we do here.

It is nice to hear the opinions of the people you like and respect on it.

7 Likes

Well, yeah, it’s something that will almost certainly take most jobs in the next 10 years or so and will end up with hundreds of millions losing their careers, let alone their jobs.

That’s before we start talking about fake news/images/videos/sound etc. that will be used by governments.

Let’s put it this way: I moved from Windows to Linux to avoid this crap, and I am the biggest Windows fanboy you will find.

I disagree. Kinda.

First of all, there are some jobs that AI cannot substitute for in ANY capacity, and, unless we end up with AI employing itself, it won’t be hundreds of millions of people losing their jobs. It’ll definitely be millions, but hundreds of millions? Maybe I’m underestimating the number of people whose jobs can be done by AI, but if most companies drop AI (when it finally finds its place and turns out to be a good investment only in specific sectors), then we’ll be kinda safe, I guess.

2 Likes

In my opinion, the main risk is not AI actually replacing people, but companies thinking it can: firing people or not hiring (only to find they made a mistake, which has already happened several times this year), or offering worse working conditions or lower pay because “do you want us to replace you with AI?” (which has also happened).

7 Likes

Yes, the issue is that companies think AI is the solution to most of their problems right now, their goose that lays the golden eggs.

All the companies that have adopted AI as a tool (rather than building it) are ones that think it can do anything they need, on par with workers or even better, and they then make the mistake of firing people or not hiring.

And I have reason to believe that the people in charge of some of these companies probably don’t even UNDERSTAND what AI actually is.

4 Likes

Respectfully, I believe it will destroy most white-collar jobs; in the UK and US alone, that’s well over 100 million jobs. Sure, some will be safe, but once AI can replace most office processes, why would you need a worker?

So the figure probably isn’t far off if we take the global total.

Blue-collar workers, sure, they will be safe, for now anyway.

There’s no evidence of that. Most of those tasks actually require human intervention, active work, planning, creativity, and flexibility; at the absolute minimum you’d need humans supervising them.

“AI” doesn’t think; it just builds on language guessing, and you still need to prompt it.

Most tasks that can reasonably be automated already have been, with far better tools than glorified “guess the next word” machines.

Remember, we were supposed to be in paperless offices for over 30 years now; at least that idea made sense, and reality still got in the way.

2 Likes

Most of the genuinely pointless white-collar jobs can’t be replaced, because they involve purely human tasks, or legal tasks, or exist just because the industry wants them or someone wants a job for their cousin.

The jobs they’ll try to replace are the actual real jobs: programmers, designers, writers, etc.

And they’ll create more work for the ones who remain, as has happened with self-checkout.

2 Likes

The problem with this statement is that it assumes you will get zero hallucinations from the AI the whole time it is churning through whatever it needs to (let’s say numbers).

But hallucinations still haven’t been solved, from GPT-3 through GPT-5 (almost 6 years), and when a model does hallucinate, it doesn’t even know it did unless that is pointed out. I feel that is a big problem without a human checking every single output and comparing the data (which would be too much if the output volume is cranked up).
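To make the scale of that problem concrete (the numbers below are purely illustrative, not from any paper): even a small per-output hallucination rate compounds quickly once you crank the volume up.

```python
# Illustrative only: assume each output has an independent 1% chance
# of containing a hallucination.
p_error_per_output = 0.01
n_outputs = 1_000

# Probability that every single output in the batch is clean.
p_batch_clean = (1 - p_error_per_output) ** n_outputs
print(f"Chance the whole batch is hallucination-free: {p_batch_clean:.5%}")
# ~0.00432% - so at that scale you should expect errors somewhere,
# which is why unchecked output is risky.
```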

If you look at this white paper:

  • In sections 3.2–3.5.2, where they look at 120 enterprise AIs, they highlight the ways they examine hallucination.
  • Even if a human looks at the data, section 3.4, point 1 shows that people who don’t understand AI will generally accept the output and let it pass into the system.
  • In 7.2 they show ways to mitigate hallucination and how much they have reduced it (I’ve tried this; it is not easy, as you normally have to organize your data into JSONL, make sure it is formatted correctly and in a way that doesn’t pull in the wrong data by accident, and then transform it into a vector database; there’s a rough sketch of that pipeline after this list. :frowning: it’s tough).
  • Most of the data for this white paper is from mid-2024 to mid-2025.
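For anyone curious what “JSONL into a vector database” looks like in practice, here’s a minimal sketch of that kind of retrieval step in Python. It’s not from the white paper: the `embed` function is a placeholder for whatever embedding model you actually use, and a real setup would use a proper vector database instead of a brute-force list.

```python
import json
import math

def embed(text: str) -> list[float]:
    """Placeholder: swap in whatever embedding model you actually use."""
    raise NotImplementedError("plug in your embedding model here")

def load_jsonl(path: str) -> list[dict]:
    """Each line is one JSON record, e.g. {"id": "...", "text": "..."}."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def build_index(records: list[dict]) -> list[tuple[dict, list[float]]]:
    """Embed every record up front; this is what a vector database stores for you."""
    return [(r, embed(r["text"])) for r in records]

def retrieve(index, query: str, k: int = 3) -> list[dict]:
    """Return the k records most similar to the query. These get pasted into the
    prompt so the model answers from your data instead of guessing."""
    qv = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(qv, pair[1]), reverse=True)
    return [record for record, _ in ranked[:k]]
```

The hard part (as noted above) isn’t this code; it’s getting the source data cleaned, chunked, and formatted so the retrieved records actually answer the question.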
5 Likes

I think that at least programmers on the cybersecurity side will be safe, because, at least for now, AI has no way of thinking outside the box the way cybersecurity requires, unless we miraculously reach ASI in a few years.

Don’t underestimate the number of jobs at risk. Nor the fact that most people would happily not work if it were possible.

Imagine full self-driving with AI, for instance (if it existed properly). Bye-bye lorry drivers, delivery drivers, tractor drivers, bus drivers, taxi drivers, chauffeurs - that is a whole lot of people for just one AI application. (Not to mention pilots.)

Imagine if robots reached the point they are aiming for with AI brains. Bye-bye warehouse workers, factory workers, delivery workers, shop assistants, waiting staff, bar staff - that is again a whole lot of people. Maybe even policing too. The army? Why have 100,000 men in the army when you can have 1,000 and 99,000 robots?

The hope is we can re-evaluate what our lives are for. We are supposed to work to live, not live to work. Imagine then a world where everyone gets a default minimum salary, say £20k each. If you choose not to work or cannot find work, you get to live on your default salary. That would be interesting, wouldn’t it? Or perhaps we could re-evaluate the whole capitalist society! Exciting times.

7 Likes

This is such a meta conversation, it being a conversation about the fact that we are having conversations about AI. I agree with everything @athousandships said 100%. I really think the perception of “what AI can do” among business owners is much more dangerous, at least in the short term, than what LLMs can actually replace.

I also personally think UBI (Universal Basic Income), which @pauldrewett alluded to, would be a great thing, especially here in the US. Having UBI would solve a lot of our broken social programs in the US. UBI will become a more popular idea when unemployment skyrockets and disrupts the bottom line enough to create financial or social unrest that threatens businesses’ solvency. Basically, corporations will start paying UBI taxes when it becomes the only way for them to stay in business, like an ouroboros.

I think UBI will also solve the problem that so many artists are facing right now: how can I make a living with my art if businesses use AI to replace me? What we will end up with is bespoke, human-made art becoming a premium, especially once artists no longer have to create art to survive. There are a lot of accepted myths that good art can only come from pain and suffering, but reality doesn’t bear that out.

Vincent van Gogh is one of my favorite examples. Yes, he created fantastic post-impressionist art. Starry Night is my favorite painting of all time. (I highly recommend seeing it in person at the MoMA in NYC if you ever get the chance, because it is actually a 3D painting that no picture can capture - I know because I’ve tried.) He suffered from mental illness. He was financially supported by his brother, who bought his paints for him. Van Gogh, like many artists of the Renaissance period (he was not of that period), had a patron, resulting in individualized, localized UBI.

We are seeing this now with AI music. It’s good. Good enough that multiple AI (LLM-assisted) songs have become Billboard #1 hits and people can’t tell. What AI can do is mass-market appeal. What it can’t do is create something very specific. I’ve been playing with it for 6 months now, and it’s great as long as you know what a good song sounds like, and you’re not too picky about details - and you like playing with RNG until you get something “good enough”.

And that’s the thing about LLMs - the danger isn’t “can replace a human” but “good enough that I can get along without a human doing it”. Things that require accuracy will always be harder - but when the math works within statistical tolerances, it will replace accountants as soon as it makes fewer mistakes than a human - not when it’s perfect.

But, to the point @athousandships made earlier, you will still need knowledgeable people making stuff with LLMs. Those chart-topping songs weren’t made with a one-word prompt. They were made by musicians who learned how to direct the AI to make something good - and no one heard all the AI slop that was generated to get there.

I made the Dragonforge Theme Song with a simple prompt. I literally needed a filler song that I had the rights to legally release so I could add it to my Game Template as a default example song. I didn’t really care what it sounded like as long as it wasn’t crap. And I got a pretty epic song out of it. Suno gave me two options, and I really liked the one I linked above.

When I started using Suno, it was impossible to make two songs on the same musical theme. Two months ago, in October, I made a game jam game called Skele-Tom with LLM-generated music on a theme. I told everyone it was AI music, and then I got 70 people to review my game jam entry. Despite the AI music, people loved my game and the music. People asked for the soundtrack, which I published. (It’s ridiculously killer IMO.) I placed #12 overall and in the music category - despite people knowing it was AI-generated. You can read the comments here and note that many people hated that I used AI music but still admitted it was really good, and found that frustrating. And I agree - that is frustrating.

Disney just made a deal for 200 of their characters to be available in Sora, and the deal was a $1 billion investment by Disney. They are getting ahead of things by jumping on the bandwagon for video content creation, and everyone is going to follow suit. This is going to have a huge impact on the creatives in the movie/TV industry.

Personally, I like the idea of not having to spend the rest of my life chasing the brass ring just to eat and have a place to sleep in my old age.

Also, I read about cognitive offloading yesterday and wanted to pass this along because I think it’s interesting, and it backs up my own theories on why learning by LLM is a bad idea.

5 Likes

“Tractor” is a general term. Saying AI can run a tractor is like saying that AI can operate a computer. The statement requires more specificity.

Driving a tractor does not require the same AI as driving a taxi.
In fact, for many operations, a tractor’s primary function is not driving at all; that is secondary.
It is also the case that there are many different types of tractors, and each has a unique set of operational circumstances.
If we are a decade/century away from AI reliably driving, then we are 10 decades/centuries away from AI reliably handling most tractor operations.

Imagine then a world where everyone gets a default minimum salary

The most likely amount would be $0. But I guess I can be a bit cynical.

1 Like

I grew up on a farm, and I’m having trouble understanding this statement. I mean I guess you could say that its primary purpose is hauling stuff… but it can’t do that if it doesn’t drive…

Fantastic! Then we are already there!

4 Likes

This is a tractor:

As is this:

And this:

And of course many of your farm implements.
The way to understand my point is this: if the tractor does nothing but drive, it is useless (towing machinery is the obvious exception). This is not the case with a delivery van or a taxi.

That’s a tractor with a backhoe and loader/plow attached on the ends.

That’s a bulldozer. Specifically a Caterpillar D9.

That is known as an excavator or backhoe. Specifically a Caterpillar 335.

I do understand your point now. And my confusion. Your definition of a tractor is different from mine.

We will probably have to agree to disagree, because I personally think that building a tractor that can cover specific acreage is a lot easier than navigating city streets with traffic lights, other vehicles, and pedestrians.

2 Likes