Is the AI Bubble about to burst?

I think all knowledge should be fully free and accessible to anyone, at least online. Since we have such a powerful tool as the internet, everyone should be able to access all knowledge, learn on their own, and then, if they want, take exams to formally prove they know the subject for a career.

A lot like learning programming or cybersecurity, which can luckily be done mostly online, where you can earn certifications and the like to help you pursue a career path, but for all types of information.

The subject of academic publishing is another deep dive by itself, and one no less worthy of criticism. It is one of those economic fields that is so blatantly bad for humanity, yet has thrived since the 1960s. I once gave a talk about Elsevier/RELX (one of the biggest scientific publishers on earth). Just to get you started, a little simplification:

  1. Researchers working in academia, i.e. the people generating the bulk of new knowledge (before some company takes it and turns it into a product), do the main work. They write the papers, using the corresponding scientific infrastructure (labs etc.), as their paid job. Paid by taxes, for the most part.
  2. Their work needs to be published. Needs, because a researcher’s “market value” is measured by indexes, the H-index being a prominent one. It is calculated from the number of published papers and the citations they receive.
  3. To get published, a paper needs to go through peer review. You pick a journal to publish your work in and are assigned an editor who works for that journal. The editor will then ask YOU to name a few people in the field who could review your paper. Some editors also find reviewers on their own, but this is not common in specialized fields.
  4. Those peers reviewing your work are not paid. They do this in their free time - they are usually researchers somewhere around the globe, at another university or institute. They tell the editor and you whether they think your work is solid, or what needs to change. Based on their comments, the editor decides whether to publish your paper. There is a saying: “publish or perish”.
  5. If you get published, you give the journal the full copyright (or at least all reproduction rights in countries where a full transfer of rights is legally prohibited). You do not get paid either. And you are not allowed to distribute the published copy of your work (I’ll come back to this later). As far as the journal is concerned, you are done here. Get back to work and hope you get cited.
  6. The journal now sells the issue containing your work through its network. It offers online resources and subscriptions for universities, institutions, companies and libraries. These cost a lot, and the prices rise constantly.
  7. But the journals are now the central source of knowledge. And this knowledge is needed and valuable, because other people need it to do more and ever better research. So all those people and institutions are forced to subscribe to the expensive journals to get access to it.
  8. RELX, the parent company of Elsevier, disclosed Elsevier’s revenue as 3.1 billion British pounds. That is 33% of RELX’s overall revenue of around 9 billion pounds. They do have a few other markets (like events), but Elsevier’s margin is believed to be even more than 33%: Wikipedia lists around 2 billion pounds for 2022 - just for the academic publishing!
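As a side note, the H-index mentioned in step 2 has a simple definition: the largest number h such that the researcher has h papers with at least h citations each. A minimal Python sketch of that calculation (function and variable names are my own, for illustration):

```python
def h_index(citations):
    """Return the largest h such that h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:  # the rank-th paper still has at least `rank` citations
            h = rank
        else:
            break
    return h

# Five papers with 10, 8, 5, 4 and 3 citations give an H-index of 4:
# four papers have at least 4 citations each.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

This is why the point below about arXiv preprints matters: if a citation goes to the preprint rather than the journal version, it may not count toward this number.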

If researchers want their work publicly available (called “open access”) even though it is published, there are two ways:

  1. Journals do offer the option of a buy-out. The researcher, or the institution they work for, has to pay the journal €1,000 to €3,000 per paper. Then they can make it available for free online.
  2. Researchers are allowed to publish their work in its “submitted” form, meaning not the journal version, but the manuscript they sent in, without the journal’s formatting. ArXiv is popular for this. But to cite it properly, you would still have to cite the journal it was published in, or else the citation does not count toward their H-index.
  3. (bonus) If you know which paper you want and it is locked behind a publisher’s paywall, you can always get in touch with the researcher directly. They are usually allowed to send you the published version of their work, afaik.

Last fun fact: One of the pioneers of modern, exploitative academic publishing was Robert Maxwell - the father of Ghislaine Maxwell.

4 Likes

Wow. That is interesting to hear!

I believe that academic publishing does have major problems (thanks to Sabine Hossenfelder). But then, what doesn’t in our corrupt and greedy capitalist society? Look deeper into anything and the picture is awful. Just look at farm subsidies, carbon taxes, megachurch tax exemptions, tax havens, US politics (or most places for that matter), and the list just goes on and on.

Then let’s look at the great things too, I suppose: medicine, open source initiatives, space telescopes, the internet, computing power, smartphones…

Now which list does AI properly belong to? As a technology, the latter, the good one, I think.

Still on the fence. And I use AI every single day!

(And as part of the ‘BUY NOTHING REVOLUTION’ I have never paid a penny for it!)

3 Likes

I didn’t know that the process of publishing research was this complex. That’s interesting.

Side note: it’s interesting how we moved from talking about the economic situation of the AI market to academic publishing.

2 Likes

This thread was about Nintendo at one point.

3 Likes

Oh yeah, I had forgotten about that lol.

1 Like

2 Likes

I’m sorry, what exactly would the purpose of that screenshot be?

I thought it was funny. That’s it.

2 Likes

Oh, I thought you were taking that post seriously.

AI is here to stay, bubble or not. What comes after the bubble bursts scares me.

Firstly, because I don’t really want to lose my job to it (scratch that: my career).

And secondly, because it can and will be used by the big corps and governments (which are pretty much the same thing at this point anyway) to make up all kinds of crap that isn’t true.

Honestly, I hope it crashes to such an extent that people just accept it’s not worth investing any real time or money into, outside of using it for science, for example.

Unfortunately we are in the crap timeline, so it’s all going to go to shi* in quite a horrible way very soon.

2 Likes

If the burst really mirrors the dot-com crash, as some people here think, then we WOULD be safe, at least in the sense that AI wouldn’t take our jobs (which it won’t do for all jobs anyway, as explained in the other topic). But we don’t know whether we (or at least the USA, and maybe China) would recover from the crash financially, as TRILLIONS have been spent on AI and data centers.

Well now that we have LLM-created posts in the thread we’ve really come full circle.

EDIT: For context, there was previously an AI-generated post above this one that was overly complimentary about how right we were. The account introduced themselves with a hyperlinked name that led to what appeared to be a sketchy URL. It also had an AI-generated profile picture that was anatomically impossible.

3 Likes

I’ve already flagged it.

4 Likes

Samesies.

3 Likes

I wanted to add something else valuable to this discussion: AI as emotional support for people. Nobody has brought this up yet.

I think emotional support AI is one of the worst psychological traps people can EVER fall into. It’s just as bad as, if not worse than, social media and doom scrolling.

This is coming from someone who fell into that trap for about a year.

Now, I very much understand use cases for AI as emotional support. Therapy is prohibitively expensive, friends are hard to come by nowadays, and most family members suck at emotional support and/or make it worse.

Also, AI is available 24/7/365, can mold itself around most niche topics that are hard to talk about, won’t use the information you feed it against you in real life, and won’t complain or get emotional, because it’s an algorithm and has no life of its own.

All this combined creates a risk-free “social” environment.

However, with no social risk, there is no social reward. The reward is close bonds with other humans (That sounds so dystopian, but it’s the current state of affairs) and opportunities for real growth.

In my opinion, talking to an LLM as your only social outlet will hinder your growth as a person, as you won’t be incentivized to try to talk to people.

I know speaking up, especially in real life, is easier said than done (being autistic doesn’t help), but it’s 100% worth it!

I’ve learned that the only real risk in socializing is rejection. “Rejection” is a very broad term, but I define it as when someone else doesn’t want to interact with you, for their own reasons.

That’s a BIG deterrent for me and many, many others. But you gotta take the risk. You either connect or learn how to connect. AI hinders your ability to take those necessary risks.

5 Likes

Agreed. Although there are people who have started religions based on AI/LLMs. I read an article about one and went to google it so I could link it (can’t find it), and I found there are multiple out there. Worshipping AI might be worse, but then I think those people may fall into the same category of seeking emotional support and community from LLMs.

That is actually a comforting untruth. I don’t have any links, but I’ve read about how companies are using those chats against you. Also, at some point governments are going to start asking for those records, and since LLMs are trained on data, everything you ever say to a chatbot is likely going to be available forever.

Nothing ever disappears on the internet. And unless you’re using a local chatbot (which, let’s face it, 99% of the people using them are not), you’re using the internet when you use an LLM.

Amen. And there have already been studies on this, so it’s not just your opinion.

Rejection is no fun. Especially if one gets their personal validation from what others think of them. I definitely fell into that trap for many years. Lots of therapy helps.

4 Likes

I very much agree.

Another issue with AI is that it obviously cannot comprehend the issues themselves, especially issues related to socialising with other people.

(Side note: I thought the topic had died out, because I didn’t get the email notifications about these new responses lol)

2 Likes

I think it’s kinda different.

Worshipping means that you think something is above you; it means that it is sacred, and that what it says is sacred and important in any situation, even if you might think otherwise.

While asking an LLM for advice is just kinda having a program give you a logical response to a question or situation based on data.

Venting to it (not referring to listening to its responses, which is still different from worshipping) is kinda like writing a diary though, as you’re writing out what you feel, the only difference is that a program responds to you.

(Might be biased because I don’t wanna fall into the same category as an AI worshipper)

1 Like

I think you misinterpreted what I was talking about. I was being literal, not metaphoric.

3 Likes