General Talk - Hardware, AI Burst

It’s hard to miss the GPU price hikes of recent years, and since last November RAM as well.

The most popular IDEs now ship LLM chat, and OS design that is more “visual” than “performance” oriented seems to be the trend for both Windows and Apple.

Where is it heading?
Will the average gamer, on their “low-powered device”, soon be forced to rent/subscribe to cloud computing power from the “Big Players”?
Maybe we will see more of a trend toward replaying less demanding games :slight_smile:
Will AI ever burst? Or is that only the wish of people who hate these companies for taking people’s jobs and hiking component prices?

Wondering what others think. (No, this is not another topic about whether to stop or keep using AI; it’s more general-opinion oriented :slight_smile: )

Gotta ping the G.O.A.T @Demetrius_Dixon

If you run a company, will you ever fire an employee and replace them with AI?

I will NOT do it…

4 Likes

no, but the corpos think differently

2 Likes

:distorted_face:
LoL yeah…
Then let’s wait for the next WannaCry ransomware attack.

Because A.I. is DUMB. I know it can make stuff, but it can’t finish it.

3 Likes

@iOSxcOder

Read this entire topic. You’ll be well informed afterward.

6 Likes

I think the AI bubble will burst eventually unless something big changes; if it stays as it is, it will burst. It’s not called a bubble for the heck of it. It’s the same big companies passing around the same money, which is basically fake profit. The use of AI in companies, correct me if I’m wrong, hasn’t shown much profit. In my opinion it will burst if nothing big changes, because most AI just isn’t profitable.

1 Like

I am gonna say it.

AI… well, at least generative AI such as LLMs and diffusion models is heavily unoptimized. We’re literally using algorithms that were proposed back in the ’50s and ’70s.

The craziest part is how LLMs operate: they predict word by word. It’s so unoptimized, and instead of actually optimizing it or coming up with different algorithms, we’re just pouring in more hardware…

What? The LLM makes up random stuff? The LLM can’t keep the context of the overall conversation for longer than 4 messages? No problem! Let’s just pour in MORE AND MORE HARDWARE RESOURCES.
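(Side note: the word-by-word prediction described here can be sketched with a toy bigram model. The corpus and outputs below are invented for illustration and have nothing to do with a real LLM.)

```python
from collections import Counter, defaultdict

# Toy corpus, invented for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which token follows which: a bigram model, the crudest
# possible "predict the next word" scheme.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(token):
    """Return the most frequent next token after `token`."""
    return following[token].most_common(1)[0][0]

print(predict("the"))  # -> "cat" ("cat" follows "the" twice here)
```

A real LLM replaces the counting table with a transformer over billions of parameters, but the loop is the same: predict the next token, append it, repeat.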

Seriously man… what happened to the good ol’ idea of optimizing everything so that it would run even on the worst hardware?

Of course AI will burst. It’s so unoptimized and makes no sense.

They are not going to create AGI anytime soon. These are literally just unoptimized large chatbots.

In order to create AGI, a whole different architecture must be proposed, consisting of self-awareness and [I deleted that]. Anthropic is the only real company going in the right direction toward AGI. In their product “Claude”, specifically, the self-awareness part is really well made: it is able to view, read, and edit any file on the filesystem. It’s literally how an organism in a computerized environment would be aware of its own surroundings.

(Yet, consider that self-awareness does not mean having a desire. You can be aware that there are files around you, but what exactly is going to make you perform a certain action of your own desire?)

1 Like

IMHO the current bubble will burst, but that will not be the end of AI, only of some of the companies.
The genie is out of the bottle; the dotcom bubble didn’t end the internet.
The current time is when “we” can use it for free, so they can harvest billions of chat sessions to train the next models. The enshittification of the business will come when the real cost of the tech has to be paid, with money or with your data. Some will survive, like Alphabet, or now OpenAI (with government support, your taxes): the companies that support the “system”.
The OP’s question was about hardware, and that looks bad. I think the current PCs are the last for some time; it’s back to cheap terminals accessing the mainframes of our time, the data centers (like in the 70s).
I have the impression that there is no interest in selling the consumer more capable hardware than we can buy today; you must use the cloud. A lot of companies are now leasing hardware to consumers as “hardware as a service”. Many of the companies that sell systems to consumers will not survive this. The supply shortage for memory, SSDs, HDDs, GPUs, etc. will be there for years. There is some stock, but it will be gone in the short term. The factories are sold out until 2027/2028.
When the bubble bursts there will be some 2nd hand hardware available, maybe a lot.

3 Likes

No, it isn’t. You may just have been fooled by their marketing. They’re actually the epitome of everything wrong with “AI”. Your main complaint was about mindless scaling; recently, on the Dwarkesh Patel podcast, Amodei basically admitted that all their bets are on “scaling the compute”.

Claude is not “aware” that there are files on your computer any more than a calculator is aware there are numbers in its registers. It’s all an illusion.

What genie? I hear this slogan repeated a lot. Almost as if some PR strategist broadcast it through all available media channels to plant it into the minds of worldwide muggles. But what exactly is this genie? Neural nets? LLMs? Diffusion? The transformer architecture? Chatbots? Billions’ worth of TPU purchases? The normalization of tacit, massive-scale data theft?

6 Likes

“Pandora’s box has been opened.”

All of them, “machine learning” as a whole. It’s a new paradigm for problems that humans can’t find or develop an algorithm for. How would you write a GDScript to distinguish between a dog and a cat in an image?
Image generation and chatbots are gimmicks in the scope of the possibilities. IMHO the technology shouldn’t have been developed; it will not end well. That’s what I mean by “the genie is out of the bottle” and “Pandora’s box”.
The (surveillance) companies love that they can convert any unstructured and unlabeled data to meaningful profiles. With ALL of the data.
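The dog-vs-cat question is exactly where learned models replace hand-written rules. A toy perceptron in plain Python shows the idea of learning a boundary from labeled examples instead of coding one; the two “features” and every number here are invented for illustration.

```python
# Toy perceptron: learn to separate "cats" from "dogs" from examples.
# The two features (ear pointiness, snout length) and all numbers
# are invented; real image classifiers learn from raw pixels.
data = [
    ((0.9, 0.2), 1),   # cat-like: pointy ears, short snout
    ((0.8, 0.3), 1),
    ((0.2, 0.9), -1),  # dog-like: floppy ears, long snout
    ((0.3, 0.8), -1),
]

w = [0.0, 0.0]
b = 0.0
for _ in range(20):                  # a few passes over the data
    for (x1, x2), label in data:
        score = w[0] * x1 + w[1] * x2 + b
        if label * score <= 0:       # misclassified: nudge the boundary
            w[0] += label * x1
            w[1] += label * x2
            b += label

def classify(x1, x2):
    return "cat" if w[0] * x1 + w[1] * x2 + b > 0 else "dog"

print(classify(0.95, 0.1))  # -> "cat"
print(classify(0.1, 0.95))  # -> "dog"
```

Nobody wrote a rule like “if ears pointy then cat”; the boundary fell out of the data, which is the whole paradigm shift being described.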

But this machine learning still relies on chains of basic operations.

Here I can provide some info, as I’ve tinkered with PyTorch, TensorFlow, and Apple’s ML frameworks. All of them share one thing in common: what matters is which algorithm you choose for training and testing. It’s not a magical organic computer that could have an independent response to a presented signal.
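To back up the “chains of basic operations” point: strip away the frameworks and one training step is nothing but multiplications and additions. A minimal sketch, fitting y = w·x by gradient descent on made-up data (no PyTorch/TensorFlow needed):

```python
# One weight, one input: fit y = w * x by gradient descent.
# The "dataset" is generated with a hidden true weight of 3.0;
# every training step below is plain arithmetic.
points = [(x, 3.0 * x) for x in (1.0, 2.0, 3.0)]

w = 0.0
lr = 0.05
for _ in range(200):
    # gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in points) / len(points)
    w -= lr * grad

print(round(w, 3))  # -> 3.0 (recovers the hidden weight)
```

The frameworks automate the gradient bookkeeping and run it on GPUs, but the loop is still this: compute an error, compute a slope, nudge the weights.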

3 Likes

I don’t think that’s the “genie”. Neural nets have been in use for pattern recognition for decades. The genie/Pandora talk started only after the big players realized that shifting pre-trained embeddings via the transformer architecture can produce convincing enough illusions of “AI understanding the context” in the average person’s perception. And it can only do so if mountains of stolen data are backing it. The genie metaphor may just mean: “Oops, we stole the data, we’ll continue to do it, and we’re not giving it back. So sue us. If you can.”

6 Likes

Well said.

This is the essence of it all: “common folks’” opinion doesn’t matter anymore to those in power. Would anyone even go protest? Would that protest matter and change a thing?

Just look at the state of the world around us, and at what happens if broadband becomes unavailable, or during a simple bank-system downtime.

3 Likes

Yo, popping back in lol. I agree that everyone here has a point to a certain extent. Claude, in my opinion, is the only AI that feels artificially intelligent in that sense; GPT just kinda agrees with you, and other AIs feel a bit like talking to a 10-year-old, just one with good language: having to explain stuff over and over, etc. That doesn’t mean Anthropic is a morally good company, however. I have voiced my opinions on AI many times on this forum, but here isn’t the place for it. And yeah, the bubble will likely burst, but I agree it won’t be the end of AI. AI is something I believe we will just have to live with from now on. But the insane AI scaling we see happening right now will burst.

1 Like

Hello! I believe you misunderstood something, or I said it in a wrong way.

I do not mean that Claude is aware on its own. I mean that it has an implementation that lets an LLM agent read/write/create/delete files. Now consider this: what if that were used not with an LLM but with… a model of an absolutely different architecture, consisting of multiple neural networks, that would have consciousness?

That implementation, I would say, is quite complicated. They managed to make it in such a way that Claude is able to understand image files, video files (a little bit), and text files. This fits perfectly as an additional piece of self-awareness for a future model consisting of multiple neural networks for consciousness.

You should learn a bit more about how LLMs work. I think using anthropomorphizing language in the context of LLMs is deeply misleading. And I’ve noticed that only two categories of people do that: ones who don’t know how they work, and ones who sell them. A match made in heaven? :rofl:

You use words like “consciousness”, “self-awareness”, “understand”… Those are human mental processes falsely ascribed to a piece of software merely on the basis of its textual output. To us humans, the output might look like the result of such processes, but that doesn’t mean it is. The fact is: it isn’t. There is no simulation of a mind running anything resembling such processes. It’s just an illusion created by statistical analysis of enormous amounts of language data.

Understanding how LLMs work is hard because it involves visualizing high-dimensional linear algebra. When they say a model has millions of parameters, it means it is doing calculations with vectors in enormously high-dimensional spaces. Compare that with how much trouble we have merely visualizing 4-dimensional space.
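For the vector intuition: the geometry is the same in any number of dimensions. A toy 4-dimensional example, where the “word vectors” and their numbers are invented (real embeddings have hundreds or thousands of dimensions, but the arithmetic is identical):

```python
import math

# Toy 4-dimensional "word vectors"; all numbers are invented.
vec = {
    "cat": [0.9, 0.1, 0.8, 0.05],
    "dog": [0.85, 0.15, 0.75, 0.1],
    "car": [0.05, 0.9, 0.1, 0.8],
}

def cosine(a, b):
    """Angle-based similarity: 1.0 means pointing the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

# "cat" sits far closer to "dog" than to "car" in this space.
print(cosine(vec["cat"], vec["dog"]) > cosine(vec["cat"], vec["car"]))  # -> True
```

The same dot products and norms run in million-dimensional spaces; we just can’t picture them, which is exactly the visualization problem described above.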

5 Likes

I think I know how LLMs work.

They’re just predicting the next token (a vectorized word, in broader terms) after the previous tokens, using probability theory, etc.

And all words are actually vectors, and there is a bunch of transformation matrices that turn the user’s text input into something the LLM can work with, etc.

What I was trying to say is that in order to create AI with “consciousness”, an ABSOLUTELY different architecture is required. An LLM can’t be conscious. It’s literally an unoptimized chatbot (considering that all the LLMs we use are autoregressive…).

Correct me if I am wrong again.

Have you tried some local LLM via Ollama or similar? Try changing the seed.
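To illustrate the seed point without needing Ollama installed: sampling from a fixed next-token distribution with Python’s `random` shows why the same prompt gives reproducible output for one seed and varies with another. The tokens and probabilities below are made up.

```python
import random

tokens = ["cat", "dog", "mat", "fish"]
weights = [0.4, 0.3, 0.2, 0.1]  # invented next-token probabilities

def sample(seed, n=5):
    """Draw n tokens from the same distribution with a fixed seed."""
    rng = random.Random(seed)
    return [rng.choices(tokens, weights)[0] for _ in range(n)]

print(sample(1))               # reproducible for this seed
print(sample(1) == sample(1))  # -> True: same seed, same "reply"
```

Ollama exposes a similar `seed` option among its model parameters; fixing it makes a local model’s sampling reproducible, and changing it varies the output even though the underlying probabilities haven’t changed.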

Personally, yeah, the AI bubble will eventually burst, seeing how artificial “intelligence” has developed since they released ChatGPT. Back then it had a small amount of data to work with and was barely capable of giving advice on how to eat a banana. Since then, AI has required more and more data centers across the globe, which requires more and more resources, more than even the big companies can afford. Yes, they really should be optimizing things, and unless they pull off what happened with NES programs, going from Super Mario Bros. to Mario 3 (in terms of memory and data optimization), AI will never be able to sustain itself.

Yeah, but there’s a reason I prefix everything with “unless something big changes”. I agree the AI bubble will likely burst, and I agree with everything you said, but the need for more RAM, VRAM, etc. might see companies developing new tech at accelerated rates.