How do you personally use LLMs and generative AI?

We’re techies and creatives here, so I’m curious how people are using and beginning to use all these new tools like ChatGPT, Copilot, Claude, Bard, Dall-E, Stable Diffusion, etc.

Personally I use it to answer questions that I believe have an agreed-upon answer. Getting a well-laid-out answer from Bing is better than trying to decide which web page will have the best answer. I also use it for simple calculations and estimates. Example: “how long does it take to read 10,000 characters out loud?” It shows the assumptions and the math, and it looks trustworthy.
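
The kind of back-of-envelope math it walks through looks roughly like this (the numbers below are my own rough assumptions, not the bot’s exact output):

```python
# Rough reading-aloud estimate, assuming ~6 characters per word
# (including the trailing space) and ~150 spoken words per minute.
characters = 10_000
words = characters / 6     # ≈ 1,667 words
minutes = words / 150      # ≈ 11 minutes read aloud
print(f"about {minutes:.0f} minutes")
```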

Those are pretty basic uses though. How about other folks here, how do chatbots or generative AI tools improve your work or your personal life?

2 Likes

I have tried it in the past with basic code generation for things like forms that I don’t want to take the time to write out. It worked ok. I probably need more practice with it to get what I want out of it. I ended up rewriting most of what it gave me.

The other thing I’ve tried is for bugs I’ve had. It’s a mixed bag there. It mostly copies answers from Stack Overflow, but doesn’t actually differentiate whether the answer was accepted as the resolution or not, and doesn’t seem to cross-reference those answers with the actual docs for the language or framework in question, because it often spits out very bad answers that are either flat-out wrong or at least misguided. Sometimes it likes to mash up answers that don’t work together. But this is ChatGPT and Bing’s Copilot. I haven’t tried some of the code-specific LLMs yet, like Devin. They might be better. Having said all of that, I did have one very persistent bug that I wasn’t finding answers for on Stack Overflow that ChatGPT somehow did have the answer for.

I am thinking about also trying it for a game I’m developing to help me write out the rules in a more cohesive style. I used to do some technical writing, but I’m a long way away from it and I kind of just want it to be done without having to go through and think about it so much.

2 Likes

That reminds me, I’ve been meaning to use it for generating RegEx for some search-and-replace queries. Reddit tells me it’s good at that. I should probably just learn RegEx myself, but until I do (or perhaps while I do), ChatGPT to the rescue.
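
Something like this is what I have in mind (a made-up example; the pattern is mine, not actual ChatGPT output):

```python
import re

# Rewrite ISO dates (YYYY-MM-DD) as DD/MM/YYYY in a search-and-replace.
text = "Released 2023-11-05, patched 2024-01-17."
result = re.sub(r"(\d{4})-(\d{2})-(\d{2})", r"\3/\2/\1", text)
print(result)  # Released 05/11/2023, patched 17/01/2024.
```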

2 Likes

Honestly, it’s not worth learning in its entirety. You can get by without it, or with just the super basics, and when you run into a place where you need it, look it up for that specific thing and copy/paste.

2 Likes

I’ve tried it, and found it tedious/frustrating because of the time involved in fixing errors, but I’m trying to get some stuff done in a way that no one else seems to be trying, so there isn’t a large pre-existing dataset for it to crib from.

I did have fun with one prompt though:

Create a Balrog Princess with Princess Leia buns (hairstyle) which would delight a 4 year old

“Jabba the Hutt’s Bane”

5 Likes

I use Microsoft M365 Copilot across the whole core suite. I use it in Outlook to summarize long threads that I’m looped into, which saves me at least 10 minutes of reading. I use it to quickly write emails that I then edit. I use it to take concepts such as maturity models and frameworks that we have outlined and flesh them out into more complete products. Across Word and PowerPoint, I use it as I would use a jr. consultant and have it get me 80-90% of the way there on a piece of content or report. In Excel I’ve used it for pivot tables and analysis. I always keep a human in the loop, from my own editing to handing things over to my editing team as well. I used ChatGPT 4 to write the business case to get a Copilot pilot rolled out across my org.

By giving Copilot to my consultants, I’m seeing 1.3 to 1.5 FTEs’ worth of productivity gains.

Professionally, we have used OpenAI LLMs to build tools for design work, engineering, etc.

6 Likes


Here’s a major use right here. I also use ChatGPT for editing and revision. :wink:

1 Like

So far none of them seems to have a decent handle on Japanese or Swedish, so they’re utterly useless for me.

3 Likes

I have used ChatGPT to write first-draft copy for our summer music camp. It still took a lot of editing to refine it to match our particular offering, but it was a good quick-and-dirty start that bypassed writer’s block and gave us a decent structure to work off of.

2 Likes

As I think I’ve stated in other threads, we, along with some of our customers, have been experimenting with AI (primarily ChatGPT) as a business tool.

One area where we have already had significant success is level one tech support: it can deal with around 77% of the cases without human intervention or escalation.

The other area where we and some of our customers are seeing good results is complex decisions, such as where/when to open a new business office or, conversely, whether to consolidate offices/staff to reduce costs and/or improve operational efficiency.

For one example, a customer used AI to decide at which of three potential sites to open a satellite sales office.

I know that is exactly the kind of thing that some of the big critics fear, but they need to remember that this type of analysis has always gone on in any company regardless, through things like analyzing P&L statements and other tools that quantify operational efficiency.

BTW: One major trap that we have fallen into, which actually reduces its success/effectiveness, is the tendency to talk to it in computer terms and structure queries more like traditional relational database queries, for instance.

E.g., it works better when you approach it almost as though you were talking conversationally with a person. I know that will gall and/or disturb some, but OTOH it does significantly improve results.
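
To make that concrete, here’s roughly the contrast I mean (illustrative wording only, with made-up details):

```python
# The "relational database" habit we had to unlearn:
query_style = "SELECT site FROM candidate_sites WHERE region='SW' ORDER BY lease_cost"

# What actually gets better results, phrased conversationally:
conversational = (
    "We're weighing three potential sites for a satellite sales office. "
    "Given our headcount and these lease costs, which would you pick, and why?"
)
```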

The other, secondary tip we’ve gotten is the extreme importance of properly selecting and managing your data sources. When you do that, it virtually eliminates hallucinations and totally out-there results.

Last but not least, in the near and perhaps mid term we can already see the need to develop and hire resources specifically to manage/maintain/develop the related AI tools. E.g., at this point we are in the process of creating AI-specific staff roles, somewhat like we did earlier on as we built out an IT staff.

I think some of the more starry-eyed appraisals of AI come from this type of carefully managed and qualified work with it.

Interesting times ahead regardless, but any company that isn’t actively evaluating and possibly investing in AI runs the risk of being left behind by its competition.

4 Likes

@Desertlap - your observations are real eye-openers because they validate my own observations and core concerns:

  1. When I ask Lexis AI questions in simple terms (“Who inherits from a deceased parent with multiple marriages?”) I get near summer-clerk-level responses - really a pretty good benchmark given the vagueness of the question.

  2. Since Lexis ONLY trains on its proprietary databases, I get no hallucinations (so far) - but sometimes I get very basic answers as well.

  3. Your conclusion really worries me as a solo/small-firm practitioner - the big boys will really have a leg up. Will we see end-user models we can train on our own data sources, that never leave our computers, anytime soon? For example, I have tons of files, notes, research, and forms gathered over 40 years of practice that I would like to be able to query without concern that my data (or its conclusions) goes to OpenAI/Google/Microsoft/Apple. If this is even possible, will I need a Cray computer?

1 Like

Another decidedly pedestrian use, but now that ChatGPT has removed the requirement of providing your real phone number to use their app, I finally joined.

This morning I enjoyed verbally asking for facts about an upcoming travel destination, and getting a very natural-sounding response in the voice of a Scarlett Johansson sound-alike. Very nice, the future is now!

This is already possible, afaik. With Copilot 365 you can enable settings that keep proprietary data stored and analyzed locally. It’s still challenging for the end user to figure out how trustworthy those claims are, though.
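
And for the fully local, nothing-leaves-your-machine route, the building blocks already exist in open source. A minimal sketch of local search over your own notes, assuming the sentence-transformers library and a small embedding model downloaded to the machine (the document text and names below are made up):

```python
from sentence_transformers import SentenceTransformer, util

# Runs entirely on the local machine once the model has been downloaded.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Stand-ins for decades of notes, forms, and research memos.
documents = [
    "Memo: intestate succession where the decedent had children from two marriages.",
    "Form: petition for independent administration, 2019 revision.",
    "Research note: homestead rights of a surviving second spouse.",
]

doc_vectors = model.encode(documents, convert_to_tensor=True)
query_vector = model.encode(
    "Who inherits from a deceased parent with multiple marriages?",
    convert_to_tensor=True,
)

# Rank the local documents by semantic similarity to the question.
scores = util.cos_sim(query_vector, doc_vectors)[0]
print(documents[scores.argmax().item()])
```

That only covers searching your own files, not drafting answers from them, but pairing it with a locally run model is the same pattern scaled up - and it runs on an ordinary laptop, no Cray required.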

3 Likes

@Desertlap - Agreed. Especially with ChatGPT 3.5 and Copilot, we find interviewing the AI gives the best results, putting it on enough rails to keep it focused but not so tight that it loses any creativity.

@dstrauss - your analogy is spot on. I expect the AI to fulfill the role of a jr.-level consultant or developer; I still need to work with what I get. My expectation is that the AI will get me 80-90% of the way there.

2 Likes

This is my concern in the developer world. If jr. roles are mostly taken over by AI, there won’t be anyone coming down the pipeline to inherit the mid and sr. roles, to translate and work with the AI results. Right now we have enough developers at various stages of their careers to cover it, but in 5 years, are people still going to be looking at this field as a viable career choice when AI and website-building tools like Squarespace can get you most of the way there without the need to know code?

4 Likes

That seems, from what we’ve heard anyway, to be the direction that Apple is headed: putting the primary AI processing duties on the user’s device, versus the current cloud approach of ChatGPT/Copilot.

It’s a much better privacy message for one thing, but it also has the benefit of letting you manage things like data sources.

OTOH, both Intel’s 14th gen and the Snapdragon Elites prominently feature AI NPUs, so MS at least has the option of moving those workloads on-device.

And I think, if anything, this will at least somewhat level the playing field between big and smaller players, as more and more it seems the skills of the “interrogator” are what matter most.

2 Likes

BTW, at its core, AI is both brute-force computationally intensive in a narrow way and, of course, data driven/starved.

E.g., the faster the “if this, then that” computations can execute, run against as large a data set as possible, the better the results.

Which is why, at least at the moment, RISC-based architectures have the edge (especially those that have typically been used in graphics solutions like Nvidia’s - think shaders, for instance), as RISC design inherently does a very few things as fast as possible, versus CISC, which prioritizes as capable an instruction set per clock as possible.

I’m sure you know, but these recent models are far from sequential “if this, then that” type code; they instead run massive matrix multiplications, where the matrix elements effectively hold tiny fractions of information from the training data. Anyway, carry on. :slight_smile:
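
For anyone curious, a toy sketch of what that boils down to (illustrative only; real models stack hundreds of these layers with billions of weights):

```python
import numpy as np

# One toy "layer": multiply the input by a learned weight matrix,
# add a bias, squash. No branching, just arithmetic at huge scale.
x = np.random.rand(1, 512)         # one token's internal representation
W = np.random.rand(512, 512)       # learned weights
b = np.random.rand(512)

hidden = np.maximum(0, x @ W + b)  # matrix multiply + ReLU
print(hidden.shape)                # (1, 512)
```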

2 Likes

You are correct, strictly speaking, though in my mind they are just smarter, more sophisticated “if this, then that” calculations. E.g., they are more like “if this, then that, but also possibly this or this or that,” with statistical probabilities of each “that” ranging from 0-100%.

E.g., they now can “branch” in multiple ways.
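
A toy sketch of that multi-way branching at the output end (made-up numbers):

```python
import numpy as np

# The model scores every candidate next word, then turns the scores into
# probabilities that sum to 1 - each "that" gets its own likelihood.
scores = np.array([2.1, 0.3, -1.0, 1.5])       # raw scores for 4 candidates
probs = np.exp(scores) / np.exp(scores).sum()  # softmax
print(probs.round(2))                          # [0.57 0.09 0.03 0.31]
```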

1 Like

PS: One major “aha moment” for me came when one of the developers told me how integral game theory is to the design.

2 Likes