[TriLUG] Discussion: Using LLMs the Right Way: 10/1/2025 7pm Eastern Daylight Time
Steve Litt via TriLUG
trilug at trilug.org
Tue Sep 30 21:15:15 EDT 2025
Hi all,
Where: GoLUG Online: https://meet.jit.si/golug
When: Wednesday, 10/1/2025, 7pm sharp, Eastern Daylight Time
Arrive 15 minutes early for microphone check and discussion
Who: Steve Litt, Troubleshooter, Developer, Tech Writer
What: Discussion: Using LLMs the Right Way
As a regular inhabitant of LinkedIn, I see an ever-increasing amount of
BS concerning AI and Vibe Coding (the ultimate undefined phrase).
There are people scared to death that their developer jobs will be
completely taken over by Large Language Models (LLMs), and out of
panic and fear they trot out the old tropes about "vibe coders" having
ChatGPT or Claude spit out a 5K-line application and deploying it live,
so that an army of $200/hr consultants can come in and fix the mess.
Then there are the self-proclaimed "realists" proclaiming that human
developers are obsolete, you'd better accept it, software writes
software that writes software that writes software, and it's better
code than anything written by a human, so you might as well become a
business executive. Perhaps this will be true someday, but it's nowhere
near true today.
Just today I saw a LinkedIn post blaming "AI" for permanent brainwave
alterations and inattentiveness. Well yeah, if you do any activity
wrong (listening to music, watching TV, programming, writing books,
weightlifting, taking vitamins, or making money), it can change your
brain, and often not for the better.
You just gotta love these guys who spend valuable brainpower worrying
themselves to death that job applicants are using AI to answer the
interview questions. If the applicant comes up with the right answer
under that kind of time and emotional crunch, why should they care that
the applicant used a tool to do it? Unless it's a matter of the
interviewer not knowing whether the answer is correct, in which case
maybe it's the interviewer who shouldn't have a job. Anyway, if the
applicant can quickly come up with correct answers using his AI tools,
imagine how productive he'll be on the job. Work isn't a closed-book
activity :-)
Here's the truth: LLMs are a tool. A very powerful tool, but just a
tool. They do a big portion of the job very fast, but they don't do the
whole job. As of 2025 you still need human troubleshooters; you need
people who understand how to write readable, modular code so it stays
maintainable even when the LLM can't do the job; and you need somebody
to do the interviewing and specifying. Imagine the ruckus laborers must
have made when backhoes were invented. A backhoe and its operator could
out-produce ten strong and skilled ditch diggers. But look around at any
jobsite using a backhoe and you'll see one or two guys with shovels
digging the last few inches around pipes and the like. The project still
has an architect and an engineer. A backhoe can't do the whole job, and
neither can an LLM. Very few people on LinkedIn stop to think about
this. LLMs lead neither to heaven nor hell. They're a tool. A very
powerful tool, but just a tool.
And by the way, LLMs are hugely helpful in learning new things, and
I've found that interacting with them also helps me discuss things with
humans.
At the discussion I'll briefly reveal how I use LLMs to help me develop
software, learn new things, and turn what I learn into high-quality,
lightning-fast lookup software or a service manual. Then others will
reveal their tricks, tips, and policies for using LLMs. We should all
come away with a better understanding of how to use LLMs as a tool.
Hope to see you there.
SteveT
Steve Litt
GoLUG Publicity Coordinator