Are large language models really AI?
By K123 2025-04-23 15:11:07
If you want to learn about ACTUAL AI, please go read up on neural networks and the amazing work being done in that field. We're able to simulate up to insect intelligence if I remember correctly. A chat bot isn't the way to true AI. LLMs are a form of neural network. Seems like you need to learn the basics.
By Tarage 2025-04-23 15:16:14
I will give you a concrete example of how LLMs aren't thinking. I'm a programmer; I write code for a living. I had to write some code for some platforms I was unfamiliar with, Linux and iOS. I asked one of these LLMs for some code to get the graphics card names on those platforms, and the code it spat out was somewhat functional. However, it used a method I did not want to support, so I asked it to rewrite the code without that method. Instead of writing a new function that would suit my needs, it simply deleted the line in question and spat back the same function, now even less functional than before. Why? Because it was grabbing the majority of that function from websites it was crawling to get that information.
This is basically the concept behind LLMs. This isn't thinking; this is brute force.
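For reference, the sort of lookup I was asking for isn't hard, on Linux at least. A rough sketch (following the usual sysfs/DRM layout, so treat it as an illustration rather than the code the LLM or I actually wrote, and it only returns PCI IDs, not pretty names):

```python
# Rough sketch of a Linux GPU lookup via sysfs. Paths follow the usual
# DRM conventions; exact availability varies by distro and driver.
import glob
import os

def gpu_names_linux():
    names = []
    # Each DRM card exposes its PCI device under /sys/class/drm/card*/device
    for card in sorted(glob.glob("/sys/class/drm/card[0-9]")):
        dev = os.path.join(card, "device")
        try:
            with open(os.path.join(dev, "vendor")) as f:
                vendor = f.read().strip()
            with open(os.path.join(dev, "device")) as f:
                device = f.read().strip()
        except OSError:
            continue  # not a PCI GPU, or attributes missing
        # These are raw PCI IDs; mapping them to human-readable names
        # needs a pci.ids lookup on top.
        names.append(f"{vendor}:{device}")
    return names
```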
By Tarage 2025-04-23 15:17:42
I'll admit I misspoke there, but the rest of my post, and the followup, stands. Refute that or admit you don't know what you're talking about.
By K123 2025-04-23 15:21:23
I'm not going to waste the time. Happy to hear about how NN are "amazing" but LLM (a form of NN) are useless though.
Let's not get another thread ruined where everyone says I'm wrong, then literally every tech leader and IT expert in the world comes out a few months later saying exactly what I was saying.
By Tarage 2025-04-23 15:25:30
Right, well, thanks for admitting you don't actually have an argument, don't understand what you are arguing about, and will use a single mistake to go "I WIN U LOSE" and run away like a 5 year old. At least you saved both of us, and whoever reads this, the time that would have been wasted at your nonsensical flailing.
By K123 2025-04-23 15:29:27
Tell me more about amazing NN and useless NN LLMs
By Tarage 2025-04-23 15:30:45
How many times do I need to say "I misspoke" before you understand that? You really are a 5 year old. Is that all you can say?
By K123 2025-04-23 15:32:16
GPT 2 is smarter than you. Jog on Mr. "I write a bit of code so think I'm an expert in human and machine logic and reasoning".
By Tarage 2025-04-23 15:33:55
I thought you were done. Christ you're mentally handicapped.
By K123 2025-04-23 15:34:19
"I thought you were done. Christ you're HELP I AM TRAPPED IN 2006 PLEASE SEND A TIME MACHINE." Bravo.
Serveur: Asura
Game: FFXI
Posts: 985
By Asura.Iamaman 2025-04-23 17:41:02
I have a ton of examples like this. I was looking at some open source code the other day and figured what the hell (I was gonna look anyway and figured I'd try), I'll ask Copilot about it. I asked "does calling <xyz> result in <abc> hook being called?". It went on this long multi-paragraph answer about how it did and how what I wanted to do was possible via that hook.
I then went and looked at the sources and nowhere is <abc> referenced (it was a callback in a C library but you get the idea). I told it "I don't see where the callback is referenced" and it said "Oh yes, you are right, it isn't, that won't work, and what you are doing would be very difficult", then gave me some roundabout, impossible to maintain answer that basically boiled down to "read the code and rewrite it yourself"
I think I mentioned this one before, but I was poking at it a bit when looking at how a large code base handled a certain bit of functionality. Basically there was a function using a much less efficient, wasteful mechanism for handling a data structure, unnecessarily allocating and copying it when it should just be able to increment the refcounter and use the original structure. The comment in the code explains why this is done abstractly, but the code isn't clear and it involves multiple FOSS code repos. So I asked it: "Why does blahblah() copy <data struct> instead of incrementing the refcounter?" - it gave me a paraphrased answer of the comment in the code. I then asked it more specifically: "What part of it requires this and why?" - only it told me nothing required it and blahblah() could increment the refcount instead of copying it, directly contradicting itself (and massively incorrect). Finally, I asked it to write example code for blahblah(), but instead it did something entirely different and blatantly wrong. When I tried to correct it, it said a "yes you are right, go read the code yourself" sort of thing.
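Abstracting away the actual repos, the difference I was asking about boils down to this (all names hypothetical, and the real thing is C, not Python):

```python
import copy

class Shared:
    """Toy stand-in for a refcounted data structure."""
    def __init__(self, payload):
        self.payload = payload
        self.refcount = 1

def take_ref(s):
    # Cheap path: bump the refcount and hand back the original structure.
    s.refcount += 1
    return s

def take_copy(s):
    # Wasteful path the code actually took: allocate a new structure
    # and deep-copy the payload into it.
    return Shared(copy.deepcopy(s.payload))

orig = Shared({"frames": [1, 2, 3]})
shared = take_ref(orig)   # same object, refcount now 2
dup = take_copy(orig)     # new allocation, independent payload
```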
OTOH sometimes I ask it to do something dumb, like write a bash script to parse a string, and it does it perfectly. So sometimes it saves me the effort of doing *** I just don't wanna deal with, but when it comes to anything remotely complicated, it's useless unless it's been trained on data that gives it a direct answer.
The problem is people think it can do more than it can. There are companies building businesses around LLMs doing *** they absolutely cannot do, then selling their shitty work at a fraction of the cost of what it takes to do the real work. This is the real AI apocalypse: the race to the bottom will mean it's harder for people who actually do things correctly to win business, because 95% of society is too dumb to realize the limitations of AI, and society will be upended by the equivalent of a know-it-all intern.
Another fun one I found out about this week. A friend owns a business. The Google AI somehow invented an address for them in another city/state: it combined the addresses of two unrelated businesses, then spits it out when you ask for their mailing address. The address it invented doesn't exist in the city it decided they are in, nor in the city they are actually in.
By Tarage 2025-04-23 19:11:05
Oh for sure I am not at all saying LLMs don't have uses. They just aren't intelligent AI. We aren't there yet.
By K123 2025-04-23 19:23:12
Most humans aren't intelligent. About 10% of the world can't even read.
By Tarage 2025-04-23 19:26:26
Knowing how to read has nothing to do with intelligence. You're confusing intelligence with knowledge. But I guess you do fit into the non-intelligent category, so it's moot debating you.
By K123 2025-04-23 19:30:27
I'm more intelligent than you by every possible metric.
By Aldraii 2025-04-23 19:55:12
Look out, it's Internet Tough Guy. You don't want to mess with him.
By K123 2025-04-23 20:07:17
You can't do most IQ tests unless you can read well. Of course Tarage has never actually done one because he knows he's dumb.
By Tarage 2025-04-24 00:21:59
It's cute that you think that.
By Afania 2025-04-24 01:07:46
I agree that AI isn't going to replace people with a senior level of professional knowledge. But I don't think it'll become a race to the bottom in terms of business.
I've been doing business case studies on AI use, and there are cases of businesses seeing revenue increases after they implement AI. It depends on how you use it.
If the "correct way" that you mentioned does increase revenue, then that business won't lose in the competition, period. Because the "correct way" is the way that generates more profit. And the market will decide who is correct and who is not.
By K123 2025-04-24 04:12:17
There's only so much stuff you can sell in the world and only so much you can make, etc. People will lose jobs at entry level first, which will have knock-on effects. Talent pipelines will dry up, pushing for more use of AI to fill the gaps.
By RadialArcana 2025-04-24 04:23:29
Intelligence is the brain power to be able to learn and function, if someone is intelligent they will learn to read and do everything else they need because it will give them an advantage in society.
Most people in the world are barely functional morons (including first world nations now), no matter how much money, privilege or education you give them they cannot be more than they are because they are hard capped at the mongo level.
I know progressives have this laughable idea that all people everywhere have the same potential (like they came off an apple assembly line or something), and only money and education make the difference. In reality it's mostly all genetic and some people are significantly more intelligent than others and their children are more likely to be so too. If two morons have kids, their kids are more than likely going to be morons too (even if they win the lottery and put their kids into the best education).
In the past, even morons had worth in society cause there were always tedious worthless jobs that they could do. As we push further into AI and automation more, these people will have no value to society at all.
This is when western nations will need to embrace UBI (so these millions of morons don't all rise up and burn everything), and then organized anti natal mass propaganda to stop these useless people (factually useless) having children going forward.
The big problem with AI isn't just that morons will lose value cause that was going to happen regardless, it's that people with previously highly respected abilities will also lose most of their financial value too (musicians, artists, writers etc)
By K123 2025-04-24 04:57:29
While this is true, it isn't the full picture, because people born into money succeed over more intelligent people who aren't. I've seen this at play heavily at the university I teach at now, where the students are mostly from very wealthy backgrounds. Relatives will get them internships, and/or they can afford to do one unpaid because mummy and daddy will fund them, or they're already from London or have a relative they can stay with in London to do their placements there. I've heard of students paying £200+ in train fares and hotels just to go to interviews for positions they're not certain to get. They can afford to gamble their money, whereas students who can't have had to pull out of in-person interviews with no reimbursement. Money is more important than intelligence, mostly.
By RadialArcana 2025-04-24 06:53:05
One of the problems in the west in modern times is that we have created lots of ways to make lots of money that are not linked to intelligence: sports, acting, etc., and these people want to put their moronic kids into highly respected universities. This, combined with colleges being for-profit organizations, means the colleges have a financial incentive to get these people to put their highly unimpressive kids into their universities.
Lots of students in the top universities are utter morons right now, and since many of these kids are really never going to get much educational value from being there it has turned more into value of social networking (aka, potentially making friends who may be the next zuck)
Even though rich parents can give their kids an advantage, they are still locked to their genetics though. They are still gonna be tards, tards with impressive degrees on the wall (that the teachers are pressured to pass out, so they don't get called racist or sexist for having differing pass rates).
That won't be the case much longer though, we are not far off from genetic manipulation of human embryos to boost intelligence and many other things. So the rich will have designer babies in secret, maybe the mega rich already are! Even if we decide not to do this on ethics grounds, all it will take is one country going all in on it (for example China) and creating vastly superior babies and every other nation will be forced to also do it or fall far behind.
By Afania 2025-04-24 07:00:41
Radial, I think you read too much sci-fi >.>
By Asura.Iamaman 2025-04-24 07:43:06
I'm not going to get into a long drawn-out discussion on this, but I operate a business in an industry where this is the case. It's not just me either; most of my peers are seeing the same thing, and we have been for at least a decade, but the lack of understanding among people purchasing certain types of work about what LLMs can and can't do has driven it significantly higher the past few years. You can do case studies all day long, but I can tell you that in the real world of many industries, this is absolutely the case. Revenue does not equate to good results; it just means they made money off someone or reduced their costs.
I stated before I work in the cybersecurity industry, and this is a major problem in our industry. The problem is compliance standards only state that <x> work gets done, not the extent to which it is actually done. Some people care about the quality of results, some people don't, and most aren't technical enough to understand the difference between a person doing the work and "AI". We run into this situation constantly: we bid on work for a potential customer at $x. We do things mostly manually except where tooling helps, we do a lot of detailed code review, etc. This process doesn't scale well, but we almost always find issues people have missed for years because they depend on tooling run by junior employees. We have a competitor coming in claiming to use AI and charging 5% of what we are to do "the same" work, almost word for word the same claims. They are claiming to do this on an embedded device - which is lol for so many reasons. They make all the same claims we do; the difference is they are lying, both about the results and the use of AI. I know because I know the "AI" tools they use, I know they generate terrible results, and I know the people doing the work - they all operate to scale as high as possible and offer subpar results. We once had someone (who we had a long relationship with and who actually cared) pay us for 4 weeks of work to decipher the results and identify false positives; among hundreds of "results" only 1 was valid. In the end, the customer sees two companies offering two things, one substantially cheaper, so they reduce budget and go with that even though the work quality is lower. They get shitty results, but most people don't care. Hence "race to the bottom".
This is also evident in hiring and employment decisions made across a multitude of industries where high level managers think AI does things it can't.
I'm not saying it's useless. Just yesterday I fed a shitload of data into an elastic instance and asked Copilot to write a script to query it, which probably saved me at least an hour of work. But then I ask it simple questions about code it knows about and it drools all over itself. It can, at times, identify low-hanging fruit and save us the effort - which is nice, but people equate that with having done the job, and it hasn't. It just turned probably a few hours of initial work into a few minutes of verification, leaving the more time-consuming parts to still be done manually. Most of the tools in my industry claiming to use AI actually aren't, and it's obvious when you can trick them by entering simple phrases and getting certain types of results. This has been going on since before ChatGPT and LLMs took off, but it became more compounded when those did, because it gave some legitimacy to people claiming to use AI - even though they aren't.
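For the curious, the script was roughly this shape; I'm paraphrasing from memory, the host, index, and field names are placeholders, and this hits Elasticsearch's plain REST _search endpoint rather than any client library:

```python
# Sketch of a minimal Elasticsearch query helper. Host, index, and
# field names are placeholders, not any real deployment.
import json
import urllib.request

def build_query(field, value, size=10):
    # Minimal match query against a single field.
    return {"size": size, "query": {"match": {field: value}}}

def search(host, index, field, value, size=10):
    # POST the query to the index's _search endpoint and return
    # just the hit documents.
    body = json.dumps(build_query(field, value, size)).encode()
    req = urllib.request.Request(
        f"{host}/{index}/_search",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["hits"]["hits"]
```

Nothing clever, but exactly the kind of boilerplate it's good at spitting out.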
So yes, this is happening. It may not be across all industries, but it is occurring and it's due to perception, misunderstanding, and aggressive marketing. The day ChatGPT popped up became a golden ticket to marketing.
By RadialArcana 2025-04-24 07:46:54
By K123 2025-04-24 08:10:24
Changing a snippet of genes to ones which offer more HIV resistance is a world away from making intelligence through genetics though. We will see brain implants enhancing "intelligence" before we see CRISPR gene editing making people smarter.
I dare say we will also have smart glasses using AI and computer vision to help us be more "intelligent" about everything we do before then too.
Serveur: Fenrir
Game: FFXI
Posts: 207
By Fenrir.Brimstonefox 2025-04-24 08:35:15
AI/ML is just advanced pattern recognition. Sometimes it's amazing, sometimes it's crap. It can't think; in fact, a lot of the models are so non-deterministic that slightly modifying your input syntax (whitespace/punctuation/using a similar noun/verb/adj.) can completely alter the answer from total crap to mildly useful.
Asking them coding questions, you're almost certain to find 90% of the answer somewhere on Stack Overflow, amalgamated with 10% of some other thread.
Any "emotion" or "thinking" attributed to these models is simply them regurgitating portions of input data. Be it from <insert favorite book about philosophy/science/textbook/novel/poetry etc..>, it's just spitting out patterns found there.
By Afania 2025-04-24 08:47:47
I understand there are many people who exaggerate AI's capabilities, and it certainly doesn't replace senior employees, which may contribute to why you view it so negatively.
However, I feel like whenever people bring up AI, it becomes kinda polarizing because of this.
About a year ago I was in the same camp as you. But these days I see the reality as somewhere in the middle. They are not replacing senior people, but they are a great tool to assist people. I think it's also fun to brainstorm more uses for AI to increase daily productivity.
So far I've been using it to generate spec documents quickly, do financial-statement-related calculations, write copy-and-paste boring business letters that aren't important, and write code for simple tools to export data. None of this intern-level work makes the core product or service more competent. But it saves time and increases productivity in daily life, so I can focus on more important and harder work.
Imo that's the current value of AI atm.
There are plenty of people out there who use AI much better than I can. I am still learning in this area myself.
And if not what would or could be? (This assumes that we are intelligent.)
Sub questions:
1. Is self-awareness needed for intelligence?
2. Is consciousness needed for intelligence?
3. Would creativity be possible without intelligence?
Feel free to ask more.
I say they aren't. To me they are search engines that have leveled up once or twice but haven't evolved.
They use so much electricity because they have to sift through darn near everything for each request. Intelligence at a minimum would prune search paths way better than LLMs do. Enough to reduce power consumption by several orders of magnitude.
After all if LLMs aren't truly AI then whatever is will suck way more power unless they evolve.
I don't think that LLM's hallucinations are disqualifying. After all I and many of my friends spent real money for hallucinations.