GPT and LLMs: Good? Bad? Both?

Where we talk about modern advancements like the abacus and printing press.
Soloist
Posts: 5658
Joined: Sat Nov 12, 2016 4:49 pm
Affiliation: CM Seeker

Re: GPT and LLMs: Good? Bad? Both?

Post by Soloist »

Bootstrap wrote: Fri Dec 08, 2023 5:59 pm OK, I hope I've made my point. One useful way to use GPT is to ask it to rewrite something I have already written: make it more concise, or write it a different way. I often write something, then ask GPT to rewrite it, then rewrite what GPT gave me until I'm happy with it.

Especially if an abstract for a paper needs to be 500 words or something ... or I can't quite find the right words, and I want to generate several possible things that may not be what I want either, but at least give me ideas for what I do want to write.
It’s painful. Menno would be much better off without this nonsense.
2 x
Soloist, but I hate singing alone
Soloist, but my wife posts with me
Soloist, but I believe in community
Soloist, but I want God in the pilot seat
Post by Soloist »

Actually, I've changed my mind: make it required for all the political threads. It would save people time, and nothing would change.
3 x
Post by Soloist »

Bootstrap wrote: Fri Dec 08, 2023 5:59 pm Especially if an abstract for a paper needs to be 500 words or something ... or I can't quite find the right words, and I want to generate several possible things that may not be what I want either, but at least give me ideas for what I do want to write.
Wife: Are schools actually ok with this? Maybe I'm getting old, but it sounds very similar to plagiarism, and it seems very likely to be used for that nowadays. I know there were times when I would have loved to have ChatGPT help me write things in middle and high school, but it sounds like it's just going to make students lazier. Even if you change some of it to make it your own, you're having a robot do a large portion of your homework for you.
0 x
ohio jones
Posts: 5305
Joined: Wed Oct 19, 2016 11:23 pm
Location: undisclosed
Affiliation: Rosedale Network

Re: GPT and LLMs: Good? Bad? Both?

Post by ohio jones »

I was reading a recent BARticle on Artificial Intelligence and Bible Translation. Incidentally, I ran it through a GPT output detector which said it was 17.82% AI; that probably accounts for the boring parts. :P
Modern LLMs like ChatGPT are powerful NLP [Natural Language Processing] systems that can write essays or web page content, but they are not optimized for Bible translation. For one thing, they are not trained on trusted translation resources. But developers can create high-quality output by providing good linguistic data, commentaries, and other reliable reference works as input, configuring software so that it does not use less acceptable sources. Additionally, although such models do not have good support for most of the world’s languages, an existing LLM can be “fine-tuned” for a new language, providing useful results with only a small amount of text from that language.
Other than its appearance in the title of this thread, I was not really familiar with the term Large Language Model (LLM). If I understand correctly, it refers to the collection of input data, often in the billions of words, that forms the underlying database which AI analyzes to generate predictive output. The larger the database, presumably, the more relevant the results. I can see how this is possible for major languages such as French or Swahili. Perhaps even with MennoNet and its 200,000+ posts, though a few of us are predictably unpredictable. But how does it work for languages such as Enga or Dizin or Banawa which have relatively little electronic text available, or perhaps the written form of the language does not really even exist and is being developed along with the translation effort? At what level is the language model large enough to be actually usable? How similar does a language have to be to a related language to adapt the model?
0 x
I grew up around Indiana, You grew up around Galilee; And if I ever really do grow up, I wanna grow up to be just like You -- Rich Mullins

I am a Christian and my name is Pilgram; I'm on a journey, but I'm not alone -- NewSong, slightly edited
Josh
Posts: 24202
Joined: Wed Oct 19, 2016 6:23 pm
Location: 1000' ASL
Affiliation: The church of God

Re: GPT and LLMs: Good? Bad? Both?

Post by Josh »

An LLM needs huge amounts of input texts.

With that said, work is already underway on LLMs that work from speech instead of written text, which would mean the LLM could incorporate nuance that is only present in spoken language. It is also much easier to gather massive amounts of speech data (it basically amounts to collecting audio from anywhere and everywhere).
0 x
Bootstrap
Posts: 14597
Joined: Thu Oct 20, 2016 9:59 am
Affiliation: Mennonite

Re: GPT and LLMs: Good? Bad? Both?

Post by Bootstrap »

Soloist wrote: Fri Dec 08, 2023 11:42 pm
Bootstrap wrote: Fri Dec 08, 2023 5:59 pm Especially if an abstract for a paper needs to be 500 words or something ... or I can't quite find the right words, and I want to generate several possible things that may not be what I want either, but at least give me ideas for what I do want to write.
Wife: Are schools actually ok with this?
Schools are very much wrestling with what this means for academia. I think there are ways to use GPT and LLMs that enhance education and critical thinking, and there are also ways to use them to avoid thinking at all and to cheat. Schools are going to have to figure out which is which. If students have access to the Internet, they have access to GPT and LLMs.

That's similar to other technologies like the Internet, Google, and Wikipedia. Or in earlier days, paper or video. The tools we have change the game.
Soloist wrote: Fri Dec 08, 2023 11:42 pm Maybe I'm getting old, but it sounds very similar to plagiarism, and it seems very likely to be used for that nowadays.
If I generate a 500-word abstract that summarizes a paper I wrote, then edit the result, that doesn't quite seem to be plagiarism. Especially if I generate several from different perspectives, then write an abstract with the best from each. If it makes my writing better, and I'm still doing the thinking, it doesn't make me feel too guilty. Should it? How is it different from working with a professional editor?

I suppose this depends partly on context. At work, I could hire an editor to do this work if I had the budget and there would be no ethical issue. And the editor could generally do it better. That's different from writing an essay for school, where it's morally wrong to get that kind of help.
Soloist wrote: Fri Dec 08, 2023 11:42 pm I know there were times when I would have loved to have ChatGPT help me write things in middle and high school, but it sounds like it's just going to make students lazier. Even if you change some of it to make it your own, you're having a robot do a large portion of your homework for you.
For school assignments, I agree with you. I don't think it's made me lazier in the settings I use it in, though. I'm working just as hard, but differently. I think I'm as actively engaged, and I have to check everything that is generated before I can trust it.
0 x
Is it biblical? Is it Christlike? Is it loving? Is it true? How can I find out?
Post by Bootstrap »

Soloist wrote: Fri Dec 08, 2023 6:02 pm Actually, I've changed my mind: make it required for all the political threads. It would save people time, and nothing would change.
In political threads, I think they change the balance of power between disinformation campaigns and the ability to check the very technical details they often rely on. It's a lot faster to fact check specific things. That's helpful.
0 x
Post by Bootstrap »

ohio jones wrote: Sun Dec 10, 2023 2:27 pm Other than its appearance in the title of this thread, I was not really familiar with the term Large Language Model (LLM). If I understand correctly, it refers to the collection of input data, often in the billions of words, that forms the underlying database which AI analyzes to generate predictive output. The larger the database, presumably, the more relevant the results.
There's a step between the data and the LLM: the LLM is a model of the language and its content that is constructed from this data through training.
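To make that distinction concrete, here's a toy sketch in Python. This is purely illustrative (a real LLM is a neural network with billions of parameters, not a table of word counts), but it shows the same two-step shape: the corpus is the data, and the next-word counts built from it are the "model" that generates predictions.

```python
from collections import Counter, defaultdict

def build_model(corpus):
    """Turn raw text (the data) into a next-word model (the LLM stand-in)."""
    model = defaultdict(Counter)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def predict_next(model, word):
    """Predict the most frequent next word, or None if the word is unseen."""
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else None

corpus = "in the beginning was the word and the word was with god"
model = build_model(corpus)
print(predict_next(model, "the"))  # prints "word"
```

Note that once the model is built, the original corpus is no longer consulted at all; predictions come only from what was distilled into the model.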
ohio jones wrote: Sun Dec 10, 2023 2:27 pm I can see how this is possible for major languages such as French or Swahili. Perhaps even with MennoNet and its 200,000+ posts, though a few of us are predictably unpredictable. But how does it work for languages such as Enga or Dizin or Banawa which have relatively little electronic text available, or perhaps the written form of the language does not really even exist and is being developed along with the translation effort? At what level is the language model large enough to be actually usable? How similar does a language have to be to a related language to adapt the model?
I think there's a HUGE difference in capabilities between the top 10-20 languages and the rest of the top 100. After that, it's sometimes still possible to analyze a language if it is closely related to a larger language in the top 100 or so. But they are making a lot of progress on smaller languages, especially in the No Language Left Behind (NLLB) initiative.
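The "adapt from a related language" idea can be sketched with the same kind of toy counting model. This is a made-up simplification, not how real fine-tuning works, but it shows the principle: start from a model built on a large corpus, then add heavily weighted counts from a small amount of target-language text, so a little data can shift the model's predictions.

```python
from collections import Counter, defaultdict

def add_counts(model, corpus, weight=1):
    """Add weighted next-word counts from `corpus` into an existing model."""
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        model[current][following] += weight
    return model

# "Pre-train" on a larger corpus, then "fine-tune" on a tiny one with a
# higher weight so the small corpus can override the base predictions.
base = add_counts(defaultdict(Counter), "good morning friends good morning all")
tuned = add_counts(base, "good evening friends", weight=5)
print(tuned["good"].most_common(1)[0][0])  # prints "evening"
```

The base model still supplies predictions for everything the small corpus never mentions, which is roughly why fine-tuning can give useful results from only a little text.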

And a lot depends on your use case. Some use cases work better than others. And these technologies are very new. I think even people who are very familiar with these things are still figuring out what are good and bad uses of these technologies, even in specific fields like Bible translation, the subject of the article you pointed to.
1 x
Post by Josh »

Bootstrap wrote: Sun Dec 10, 2023 4:48 pm
Soloist wrote: Fri Dec 08, 2023 6:02 pm Actually, I've changed my mind: make it required for all the political threads. It would save people time, and nothing would change.
In political threads, I think they change the balance of power between disinformation campaigns and the ability to check the very technical details they often rely on. It's a lot faster to fact check specific things. That's helpful.
LLMs make it much easier to crank out authentic-sounding disinformation.

LLMs don’t have any way to discern whether something is true or not. On net, they shift the balance of power much further in favour of disinformation. At best, we get an even worse case of appeal-to-authority bias.
0 x
Post by Bootstrap »

Josh wrote: Sun Dec 10, 2023 5:52 pm
Bootstrap wrote: Sun Dec 10, 2023 4:48 pm
Soloist wrote: Fri Dec 08, 2023 6:02 pm Actually, I've changed my mind: make it required for all the political threads. It would save people time, and nothing would change.
In political threads, I think they change the balance of power between disinformation campaigns and the ability to check the very technical details they often rely on. It's a lot faster to fact check specific things. That's helpful.
LLMs make it much easier to crank out authentic-sounding disinformation.
That is also true. A lot depends on the motives and the skill of the person doing it - most technologies that can be used to find truth can also be useful for manufacturing and spreading falsehood. After all, the Internet gives us all LOTS of access to reliable facts and information, but it has also enabled a massive flood of disinformation. Most information technologies are like that.

I think it's important to say when I am using an LLM to look at something, and I do that. The people intentionally spreading misinformation do not. But they certainly have LLMs too.
Josh wrote: Sun Dec 10, 2023 5:52 pm LLMs don’t have any way to discern whether something is true or not. On net, they shift the balance of power much further in favour of disinformation. At best, we get an even worse case of appeal-to-authority bias.
Actually, I don't see a lot of authority bias here. Few people think LLMs are authoritative without checking. So far, nobody on MN has said, "oh, if an LLM created it, it must be perfect, no need to check facts". But people sometimes do get insulted if we check facts on other things that are posted.

Unfortunately, a lot of people think that passionate political internet videos and such are authoritative. Either way, what you need is something that gets you TO the authoritative facts, something that can in fact be fact-checked. And LLMs can really help with that if you make sure they point you to reliable sources. Whatever the technology, give me a way to check if it's true.

Fact is, very, very few people read reliable sources on the things they offer opinions about. And most of this is beyond our direct experience or knowledge.
0 x