Are Large Language Models Really AI?

 Garuda.Chanti
Offline
Server: Garuda
Game: FFXI
user: Chanti
Posts: 11,661
By Garuda.Chanti 2025-04-22 20:08:57
 
And if not, what would or could be? (This assumes that we are intelligent.)

Sub questions:
1. Is self-awareness needed for intelligence?
2. Is consciousness needed for intelligence?
3. Would creativity be possible without intelligence?
Feel free to ask more.

I say they aren't. To me they are search engines that have leveled up once or twice but haven't evolved.

They use so much electricity because they have to sift through darn near everything for each request. Intelligence at a minimum would prune search paths way better than LLMs do. Enough to reduce power consumption by several orders of magnitude.

After all, if LLMs aren't truly AI, then whatever is will suck way more power unless they evolve.

I don't think that LLMs' hallucinations are disqualifying. After all, I and many of my friends have spent real money on hallucinations.
Offline
By K123 2025-04-22 20:40:31
 
No, not really. Reasoning models are pushing close to "intelligence" though.
 Bahamut.Senaki
Offline
Server: Bahamut
Game: FFXI
user: Senaki
Posts: 190
By Bahamut.Senaki 2025-04-22 21:18:01
 
I don’t think we’re there yet.

But rudimentary deep research utilization is inching us closer and closer to Skynet… Or anime waifus. Hopefully the latter.
 Asura.Saevel
Offline
Server: Asura
Game: FFXI
Posts: 10,079
By Asura.Saevel 2025-04-22 23:04:12
 
Garuda.Chanti said: »
I say they aren't. To me they are search engines that have leveled up once or twice but haven't evolved.

They are not intelligent, nor really "AI" in any meaningful sense. The only reason the AI label got attached was that the first successful attempt at implementing those decades-old algorithms used the English language to build a chat bot. The result was so much better than previous chat bots that people actually thought it was "aware".

We've known how to do that kind of node-based search algorithm for decades; it was just so processing-intensive to build the model that nobody thought it was practical. Then someone figured out how to do it with CUDA, and suddenly the ridiculous vector-processing capabilities of a GPU made it orders of magnitude more efficient.


LLMs and other versions of "AI" are just massive multidimensional arrays where each dimension represents a unique data point and the intersection of those data points represents the probability of that intersection occurring. By stringing them along we can represent the probability of any sequence of data points happening. Then, after a successful prediction, we can go in reverse and increment all of the previous intersections to reinforce the positive result. We can do the same with unsuccessful predictions, and over time the array (aka the model) becomes more and more accurate.
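A minimal sketch of that "probability table" picture: explicit co-occurrence counts standing in for the intersections, sampling from them to predict the next token, and bumping counts after a good outcome. Real LLMs learn dense neural-network weights by gradient descent rather than storing count arrays, so treat this purely as an illustration; every name in it is made up.

```python
# Toy sketch of the "probability table" picture above: co-occurrence counts
# stand in for the intersections, prediction samples from them, and a
# "reinforce" pass bumps the counts along a good sequence. Real LLMs learn
# dense neural-network weights instead of storing counts; illustration only.
from collections import defaultdict
import random

class ToyBigramModel:
    def __init__(self):
        # counts[a][b] ~ how often token b followed token a
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, tokens):
        for a, b in zip(tokens, tokens[1:]):
            self.counts[a][b] += 1

    def predict(self, token):
        following = self.counts[token]
        if not following:
            return None
        # sample proportionally to the stored counts
        total = sum(following.values())
        r = random.uniform(0, total)
        running = 0
        for candidate, count in following.items():
            running += count
            if r <= running:
                return candidate

    def reinforce(self, tokens, reward=1):
        # crude analogue of "go in reverse and increment all previous
        # intersections" after a successful prediction
        for a, b in zip(tokens, tokens[1:]):
            self.counts[a][b] = max(1, self.counts[a][b] + reward)

model = ToyBigramModel()
model.train("the cat sat on the mat".split())
print(model.predict("the"))          # 'cat' or 'mat'
model.reinforce("the mat".split())   # nudge the model toward 'mat'
```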
Offline
Posts: 15,457
By Pantafernando 2025-04-23 02:08:02
 
AI is an umbrella term describing many related fields, like machine learning and deep learning, so I don't see why LLMs shouldn't be considered a subfield of AI too.
Offline
Posts: 15,457
By Pantafernando 2025-04-23 02:21:51
 
Quote:
Sub questions:
1. Is self-awareness needed for intelligence?
2. Is consciousness needed for intelligence?
3. Would creativity be possible without intelligence?

This is more a philosophical than a scientific question IMO.

I think what distinguishes humans from other animals is the large storage capacity to retain memories.

This allows us to constantly retain data and retrain on everything, slowly evolving into what we have today.

That is why dogs (for example) aren't considered "intelligent": they can barely remember things precisely enough to "train" their neural network to make decisions beyond what behaviourism dictates.

So no, you don't need consciousness and self-awareness for intelligence. IMO intelligence is the potential to break behaviourism, to be able to make more educated decisions than simple positive and negative feedback allows. Most animals can already make decisions based on reactions. (Interesting thought: our models could probably already replicate animal behaviour perfectly if animals only work on behaviourism; see the sketch below.)

Consciousness and self-awareness are concepts humans created to alleviate their ignorance of their own existence.

Creativity is the capacity to make decisions well beyond behaviourist logic, so yes, you need intelligence to be creative. You need to create something to progress from stage 0 to stage 1 and stage 2. If you're always living by behaviourism, by the end of the day you are still on stage 0.

That is why you don't see animals progressively changing through human history (you don't see birds creating newspapers or glasses, for example, because they only react to the environment).
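As a toy illustration of the "nothing but positive and negative feedback" idea above, here is a purely behaviourist agent: it keeps one score per stimulus-action pair and nudges that score from reward alone, with no planning and no self-model. The stimuli, actions, and numbers are all invented.

```python
# Toy "behaviourism only" agent: a score per (stimulus, action) pair,
# nudged up or down by reward alone. No planning, no model of the world,
# no self-awareness -- just conditioned responses. Purely illustrative.
import random
from collections import defaultdict

class ConditionedAgent:
    def __init__(self, actions, learning_rate=0.1, explore=0.1):
        self.values = defaultdict(float)   # value of (stimulus, action)
        self.actions = actions
        self.learning_rate = learning_rate
        self.explore = explore

    def act(self, stimulus):
        if random.random() < self.explore:
            return random.choice(self.actions)      # occasional random try
        return max(self.actions, key=lambda a: self.values[(stimulus, a)])

    def feedback(self, stimulus, action, reward):
        # move the stored value toward the observed reward
        key = (stimulus, action)
        self.values[key] += self.learning_rate * (reward - self.values[key])

agent = ConditionedAgent(actions=["approach", "avoid"])
for _ in range(200):
    action = agent.act("bell")
    agent.feedback("bell", action, reward=1.0 if action == "approach" else -1.0)
print(agent.act("bell"))   # almost always "approach" after conditioning
```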
Offline
Posts: 15,457
By Pantafernando 2025-04-23 02:26:23
 
And that also means that not every human is "intelligent".

Many are simply living day to day. Barely making any plans, barely looking ahead, only working for money for the day's food, drink, and sex.

That is basically behaviourism with a new set of tools.

You could even ask if those humans who only live to work, ***, eat and sleep are even conscious or have self-awareness. If you were strict enough, you would probably conclude that they are not.

That means that, even within humanity, only a small fraction can actually push evolution ahead.

The rest are just along for the ride.
Offline
Posts: 911
By Tarage 2025-04-23 02:34:17
 
Garuda.Chanti said: »
Are large language models really AI?

No. At best they are an intricate sieve. They spit out what you put in, but they don't do any thinking about what is being spit out.
Offline
By Afania 2025-04-23 04:49:15
 
Pantafernando said: »
So no, you don't need consciousness and self-awareness for intelligence. IMO intelligence is the potential to break behaviourism, to be able to make more educated decisions than simple positive and negative feedback allows.


But... don't you need some self-awareness for some kind of drive or desire to break behaviourism or environmental reaction, though?

For example: self-awareness may lead a person to want to eat better food. Not just any food. This person sets it as their personal goal in life after they become aware of their own desire.

This person now has the drive to become a great chef who explores the potential of food. They use their creativity to create all kinds of great cuisine.

This person has now broken the basic behaviourism of simply eating for survival; they have worked and created more value in society due to their desire and drive.

I think this is one very important thing that pushes human society forward: your personal goal in life, which probably comes from self-awareness IMO.

IMO, having a personal goal and drive beyond basic survival needs is what makes humans unique.
Offline
By Afania 2025-04-23 05:10:51
 
Pantafernando said: »
Many are simply living day to day. Barely making any plans, barely looking ahead, only working for money for the day's food, drink, and sex.

Pantafernando said: »
That means that, even within humanity, only a small fraction can actually push evolution ahead.


From my experience, most people have at least some desire or goal they want to accomplish in life.

But how big an impact they can make in their life depends entirely on their ability and competency at getting things done efficiently.

Some people just do things more efficiently, faster, and smarter. They leverage their resources better too.

IMO, having a good ability to leverage resources is key to accomplishing things more efficiently than everyone else.

They make a bigger impact than those who think a lot but don't get things done efficiently.

To me, most people's problem is not a lack of self-awareness, but a lack of ability to accomplish their goals.

(Not saying people with zero goals in life don't exist, I just feel they are in the minority.)
Offline
By K123 2025-04-23 05:32:10
 
Tarage said: »
they don't do any thinking about what is being spit out.
This is not true. Go and try a reasoning model like Deepseek R1 or Qwen 2.5 where it shows the reasoning and thinking it is doing (invisible on ChatGPT and mostly hidden on Grok).
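For what that looks like in practice: the open DeepSeek-R1 weights are commonly served with the chain of thought wrapped in <think>...</think> tags ahead of the final answer (hosted UIs may format it differently), so a hedged sketch of pulling the visible reasoning out of a completion could look like this. The example string is invented.

```python
# Sketch: separate the visible reasoning trace from the final answer in an
# R1-style completion. Assumes the model emits <think>...</think> around its
# chain of thought, which is how the open DeepSeek-R1 weights are commonly
# served; other models and hosted UIs may present reasoning differently.
import re

def split_reasoning(completion: str):
    match = re.search(r"<think>(.*?)</think>", completion, flags=re.DOTALL)
    if not match:
        return "", completion.strip()            # no visible trace
    reasoning = match.group(1).strip()
    answer = completion[match.end():].strip()
    return reasoning, answer

raw = "<think>Steel is strong but heavy; aluminium is lighter...</think>Use 6061 aluminium."
thoughts, answer = split_reasoning(raw)
print("REASONING:", thoughts)
print("ANSWER:", answer)
```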
Offline
Posts: 15,457
By Pantafernando 2025-04-23 08:36:13
 
Just a personal perception, but I started using ChatGPT around the end of last year, then fell off because it was starting to feel like a worse Google search: the answers were too generic and shallow.

As my current department is a child of the AI major department, I felt compelled to give it a new try recently.

I bought a book that teaches programming with AI tools, and I'm finishing it atm.

The result is that I gave Copilot a try using the new techniques I learned, and I was honestly surprised.

It really helps with the cold-start problem, which is a major pain point of mine.

It's also a workaround for things I don't really want to invest a lot of time in, like reviewing the outsourced developers' code. They wait until they've added hundreds of lines before submitting, and I don't really care about the nitty-gritty details. Only the functional tests.

But Copilot can give an overview of it and propose better reviews than I was willing to give.

I will use more of its features soon, so I think AI is in a pretty good place atm.
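A rough sketch of that "summarise the big diff before I review it" step, written against a generic OpenAI-compatible chat endpoint rather than Copilot itself (Copilot is driven from the editor). The URL, model name, and environment variables are placeholders, so treat it as an illustration of the workflow, not a recipe.

```python
# Hedged sketch of "let the model summarise a big diff before I review it".
# The endpoint, model name, and env vars are placeholders for any
# OpenAI-compatible chat API; GitHub Copilot itself is used from the editor,
# not from a script like this.
import os
import requests

def review_diff(diff_text: str) -> str:
    response = requests.post(
        os.environ.get("LLM_API_URL", "https://example.invalid/v1/chat/completions"),
        headers={"Authorization": f"Bearer {os.environ['LLM_API_KEY']}"},
        json={
            "model": "some-chat-model",   # placeholder
            "messages": [
                {"role": "system",
                 "content": "You are a code reviewer. Summarise the change, "
                            "flag risky spots, and suggest functional tests."},
                {"role": "user", "content": diff_text[:20000]},  # keep the prompt bounded
            ],
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

# usage: print(review_diff(open("big_change.diff").read()))
```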
 Fenrir.Niflheim
VIP
Offline
Server: Fenrir
Game: FFXI
user: Tesahade
Posts: 940
By Fenrir.Niflheim 2025-04-23 08:51:35
 
K123 said: »
invisible on ChatGPT
They made the reasoning available on some ChatGPT models shortly after DeepSeek released.
Offline
By K123 2025-04-23 09:35:24
 
Fenrir.Niflheim said: »
K123 said: »
invisible on ChatGPT
They made the reasoning available on some ChatGPT models shortly after DeepSeek released.
I forget whether it was ChatGPT or Grok that only shows a brief snippet as it happens, which disappears as it reasons. Only DeepSeek was showing full reasoning when I last checked, though.
 Garuda.Chanti
Offline
Server: Garuda
Game: FFXI
user: Chanti
Posts: 11,661
By Garuda.Chanti 2025-04-23 09:41:27
 
Pantafernando said: »
This is more a philosophical than a scientific question IMO.
Exactly. It's also subjective.

Quote:
Creativity is the capacity to make decisions well beyond behaviourist logic, so yes, you need intelligence to be creative. You need to create something to progress from stage 0 to stage 1 and stage 2. If you're always living by behaviourism, by the end of the day you are still on stage 0.

That is why you don't see animals progressively changing through human history (you don't see birds creating newspapers or glasses, for example, because they only react to the environment).
In your view, do new solutions to problems count? Dogs, crows, marine mammals, monkeys, and even lab mice have all come up with innovative behavior, including new solutions to research problems.

Octopuses outclass all the above examples.
Offline
By K123 2025-04-23 10:11:40
 
Garuda.Chanti said: »
Pantafernando said: »
This is more a philosophical than a scientific question IMO.
Exactly. It's also subjective.

Quote:
Creativity is the capacity to make decisions well beyond behaviourist logic, so yes, you need intelligence to be creative. You need to create something to progress from stage 0 to stage 1 and stage 2. If you're always living by behaviourism, by the end of the day you are still on stage 0.

That is why you don't see animals progressively changing through human history (you don't see birds creating newspapers or glasses, for example, because they only react to the environment).
In your view, do new solutions to problems count? Dogs, crows, marine mammals, monkeys, and even lab mice have all come up with innovative behavior, including new solutions to research problems.

Octopuses outclass all the above examples.
The problem you're going to run into is talking about "intelligence" as a single intelligence. I resisted the notion of multiple intelligences for years, but the advent of LLMs really proves the case. They're incredibly "intelligent" in some ways, but absolutely incapable of other forms of intelligence we would consider when assessing whether someone is intelligent.

My oldest daughter (4) is extremely creative, a form of intelligence, but I doubt she will be as analytically intelligent as my son (2 years older) when she starts school - and he's not anywhere near as creative as she is.

To talk of intelligence as one measurable criterion is nonsensical - I have an IQ of 142 (top 0.5% or so) on the Culture Fair test, which is basically only pattern recognition. I also (partially due to being autistic) lack interpersonal and intrapersonal intelligence. If we want to use one number to explain IQ then I'm a genius, but when the widely accepted definition of intelligence is that of applying knowledge and understanding in new contexts for beneficial outcomes (problem solving, survival, achievement of goals, etc.) and I cannot understand my own mind or work with other people then I'm a retard. I can't be both a genius and a retard simultaneously, so the theory of multiple intelligences (Gardner) answers that contradiction.
Offline
By Afania 2025-04-23 12:11:26
 
Pantafernando said: »
The result is that I gave Copilot a try using the new techniques I learned, and I was honestly surprised.

It really helps with the cold-start problem, which is a major pain point of mine.

It's also a workaround for things I don't really want to invest a lot of time in, like reviewing the outsourced developers' code. They wait until they've added hundreds of lines before submitting, and I don't really care about the nitty-gritty details. Only the functional tests.

But Copilot can give an overview of it and propose better reviews than I was willing to give.

I will use more of its features soon, so I think AI is in a pretty good place atm.

Told ya! AI is like muscles. The more you use it or practice using it, the more productivity you can get out of it. It's like compound interest at this point.

Afania said: »
I recently ditched ChatGPT and switched to MS copilot for work, gonna say the result and accuracy is great! Highly recommended.

All hail Microsoft, my lord and savior.

Repeat after me: copilot is awesome.... copilot is awesome....
Offline
Posts: 15,457
By Pantafernando 2025-04-23 12:31:31
 
Garuda.Chanti said: »
In your view, do new solutions to problems count? Dogs, crows, marine mammals, monkeys, and even lab mice have all come up with innovative behavior, including new solutions to research problems.

Octopuses outclass all the above examples.

If they are making decisions with higher long-term profits despite taking a "loss" in the short term (a la investment), then yes, they are probably being more creative than any other animal that is simply reacting.

Humans wouldn't have needed to develop so much culture and so many tools if they were simply worried about living another day.

They did it long before they could reap the benefits that made them triumph in the food chain.
Offline
Posts: 15,457
By Pantafernando 2025-04-23 12:31:47
 
Afania said: »
Pantafernando said: »
The result is that I gave Copilot a try using the new techniques I learned, and I was honestly surprised.

It really helps with the cold-start problem, which is a major pain point of mine.

It's also a workaround for things I don't really want to invest a lot of time in, like reviewing the outsourced developers' code. They wait until they've added hundreds of lines before submitting, and I don't really care about the nitty-gritty details. Only the functional tests.

But Copilot can give an overview of it and propose better reviews than I was willing to give.

I will use more of its features soon, so I think AI is in a pretty good place atm.

Told ya! AI is like muscles. The more you use it or practice using it, the more productivity you can get out of it. It's like compound interest at this point.

Afania said: »
I recently ditched ChatGPT and switched to MS copilot for work, gonna say the result and accuracy is great! Highly recommended.

All hail Microsoft, my lord and savior.

Repeat after me: copilot is awesome.... copilot is awesome....

Don't make me regret using Copilot.
Offline
Posts: 15,457
By Pantafernando 2025-04-23 12:33:50
 
Also, I've seen benchmarks saying Copilot is behind another one (Claude, maybe?).

But somehow I have Copilot both at home and at the office; I'm not sure if it's the free version or some version tied to my subscriptions.

So that's why I'm using it over others that seem better.
Offline
By Afania 2025-04-23 12:35:30
 
Pantafernando said: »
Don't make me regret using Copilot.

Why are you choosing an AI based on my preference! Don't let internet strangers cloud your judgement!
 Garuda.Chanti
Offline
Server: Garuda
Game: FFXI
user: Chanti
Posts: 11,661
By Garuda.Chanti 2025-04-23 12:40:16
 
K123 said: »
To talk of intelligence as one measurable criterion is nonsensical - I have an IQ of 142 (top 0.5% or so) on the Culture Fair test, which is basically only pattern recognition. I also (partially due to being autistic) lack interpersonal and intrapersonal intelligence. If we want to use one number to explain IQ then I'm a genius, but when the widely accepted definition of intelligence is that of applying knowledge and understanding in new contexts for beneficial outcomes (problem solving, survival, achievement of goals, etc.) and I cannot understand my own mind or work with other people then I'm a retard. I can't be both a genius and a retard simultaneously, so the theory of multiple intelligences (Gardner) answers that contradiction.
Total agreement.

Also, the only thing IQ tests measure, or for that matter can measure, is your ability to take IQ tests. Which really has no use in the real world, nor is it predictive of any measure of success in the real world.

And by that measure we can be improbable combinations of traits such as your "both a genius and a retard simultaneously." I test well into the genius class but lack creativity which I personally consider one of the hallmarks of genius.
Offline
By Afania 2025-04-23 12:52:29
 
Garuda.Chanti said: »
K123 said: »
To talk of intelligence as one measurable criterion is nonsensical - I have an IQ of 142 (top 0.5% or so) on the Culture Fair test, which is basically only pattern recognition. I also (partially due to being autistic) lack interpersonal and intrapersonal intelligence. If we want to use one number to explain IQ then I'm a genius, but when the widely accepted definition of intelligence is that of applying knowledge and understanding in new contexts for beneficial outcomes (problem solving, survival, achievement of goals, etc.) and I cannot understand my own mind or work with other people then I'm a retard. I can't be both a genius and a retard simultaneously, so the theory of multiple intelligences (Gardner) answers that contradiction.
Total agreement.

Also, the only thing IQ tests measure, or for that matter can measure, is your ability to take IQ tests. Which really has no use in the real world, nor is it predictive of any measure of success in the real world.

And by that measure we can be improbable combinations of traits such as your "both a genius and a retard simultaneously." I test well into the genius class but lack creativity which I personally consider one of the hallmarks of genius.

I got somewhere around 130-140 on IQ tests and still almost failed high school math and science =.= It doesn't even represent pure academic performance lol. It only tests your ability to recognize visual patterns (which happens to be something I am kinda good at, apparently).

Also, from test data it seems like Chinese-character-influenced regions, mostly East Asia, get a tremendous advantage on IQ tests.
Offline
By K123 2025-04-23 13:25:10
 
Garuda.Chanti said: »
K123 said: »
To talk of intelligence as one measurable criterion is nonsensical - I have an IQ of 142 (top 0.5% or so) on the Culture Fair test, which is basically only pattern recognition. I also (partially due to being autistic) lack interpersonal and intrapersonal intelligence. If we want to use one number to explain IQ then I'm a genius, but when the widely accepted definition of intelligence is that of applying knowledge and understanding in new contexts for beneficial outcomes (problem solving, survival, achievement of goals, etc.) and I cannot understand my own mind or work with other people then I'm a retard. I can't be both a genius and a retard simultaneously, so the theory of multiple intelligences (Gardner) answers that contradiction.
Total agreement.

Also, the only thing IQ tests measure, or for that matter can measure, is your ability to take IQ tests. Which really has no use in the real world, nor is it predictive of any measure of success in the real world.

And by that measure we can be improbable combinations of traits such as your "both a genius and a retard simultaneously." I test well into the genius class but lack creativity which I personally consider one of the hallmarks of genius.
I'm actually really creative too somehow, so +1 for analytical skills and creativity, -1 for intrapersonal and interpersonal.
Offline
Posts: 911
By Tarage 2025-04-23 14:21:21
 
K123 said: »
Tarage said: »
they don't do any thinking about what is being spit out.
This is not true. Go and try a reasoning model like Deepseek R1 or Qwen 2.5 where it shows the reasoning and thinking it is doing (invisible on ChatGPT and mostly hidden on Grok).
That's like saying a pasta strainer "reasons" about what is and isn't pasta. No, it doesn't. It has no concept of what pasta even is.
Offline
By K123 2025-04-23 14:27:45
 
Tarage said: »
K123 said: »
Tarage said: »
they don't do any thinking about what is being spit out.
This is not true. Go and try a reasoning model like Deepseek R1 or Qwen 2.5 where it shows the reasoning and thinking it is doing (invisible on ChatGPT and mostly hidden on Grok).
That's like saying a pasta strainer "reasons" about what is and isn't pasta. No, it doesn't. It has no concept of what pasta even is.
wut

I'm not willing to go into epistemology or end up in a semantic or philosophical debate about this on FFXIAH... go try it and explain how it is reasoning any less than you or I might (context dependent). Ask it what the best material for x application would be, for example, and watch the variables it evaluates to come to an informed decision. Of course this is different from asking it what two flavours combined might taste like, since it could never simulate such an experience in the same way.
Offline
By K123 2025-04-23 14:31:24
 
Definitions from Oxford Languages:
thinking
/ˈθɪŋkɪŋ/

noun
the process of considering or reasoning about something.

adjective
using thought or rational judgement; intelligent.
Offline
Posts: 911
By Tarage 2025-04-23 14:41:33
 
You are giving a glorified chatbot way too much credit. You clearly have no understanding of the underlying technologies involved. It is not thinking, and frankly neither are you.
Offline
By K123 2025-04-23 14:54:20
 
I fully understand how LLMs work, and how the human brain works. I'm not saying they're the same, but to say they're not reasoning or using rational judgment is illogical.
Offline
Posts: 911
By Tarage 2025-04-23 15:07:54
 
K123 said: »
I fully understand how LLMs work, and how the human brain works. I'm not saying they're the same, but to say they're not reasoning or using rational judgment is illogical.
You literally said that if you asked it what two flavors combined would taste like, it would have no idea. You claimed it wouldn't know because it "couldn't simulate it". What do you think people do when asked that question? They "simulate it" in their heads. There are whole fields dedicated to the concept, like the "visualize an apple rotating in your head" exercise.

The point is, when you ask an LLM what the best material for X is, it isn't simulating anything; it's using the mass of information that was fed into it to get an answer that fits the filter it generated. A lot of the time that answer is completely wrong and nonsensical. Why do you think hallucinations are such an issue with LLMs? It's a glorified sieve. It isn't using judgement because it HAS no judgement. Again, the very basis of how these things are generated is as brute force as it gets. You slam data into the thing until it gives something resembling an answer. That isn't thinking, that's rolling the dice on a sieve till it strains the pasta you want it to strain. Ask it for anything it wasn't brute-force trained for and it shits the bed, even if whatever it's being asked is related to what it was trained on.

If you want to learn about ACTUAL AI, please go read up on neural networks and the amazing work being done in that field. We're able to simulate up to insect-level intelligence, if I remember correctly. A chat bot isn't the way to true AI.
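For reference, the "slamming data into the thing" both posts describe is, mechanically, a loss-minimisation loop: push data through the network, measure the error, nudge the weights downhill, repeat. A minimal NumPy sketch with toy data and toy sizes (purely illustrative, nothing like production scale):

```python
# Minimal sketch of the training loop behind both LLMs and the small neural
# networks mentioned above: forward pass, measure error, nudge weights
# against the gradient, repeat. Toy data and sizes; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))                       # toy inputs
y = (X[:, 0] * X[:, 1] > 0).astype(float)[:, None]  # toy XOR-ish target

W1 = rng.normal(scale=0.5, size=(4, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(2000):
    h = np.tanh(X @ W1 + b1)                        # forward pass
    p = sigmoid(h @ W2 + b2)
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

    dp = (p - y) / len(X)                           # backward pass
    dW2 = h.T @ dp; db2 = dp.sum(axis=0)
    dh = dp @ W2.T * (1 - h ** 2)
    dW1 = X.T @ dh; db1 = dh.sum(axis=0)

    W2 -= lr * dW2; b2 -= lr * db2                  # nudge weights downhill
    W1 -= lr * dW1; b1 -= lr * db1

print(f"final loss: {loss:.3f}")   # should end up well below the ~0.69 chance level
```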