
grafknives t1_jcqgthm wrote

Ok, then it is quite impressive.

HOWEVER! Why not use ChatGPT to generate text for ALL the parks in the world? This would be a ONE-TIME job.

I bet it would work better with programming, but it could also be done "by hand".

−8

Delioth t1_jcqpwop wrote

ChatGPT is incredibly confident and oftentimes plainly wrong, with little way to tell which answers are right and which are wrong. That is certainly part of why.

6

grafknives t1_jcr0mpp wrote

But in this case the authors NEED some source of information about every park in the world.

  1. Building a community - best, but really, really hard. Like, extremely unlikely.

  2. Scraping some data from the net - easier, but the quality and availability of the data are less reliable.

  3. Using AI - lowest quality, but available ON HAND, here and now.
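For what option 2 might look like: parks are well tagged in OpenStreetMap, so a minimal sketch could query the Overpass API for `leisure=park` elements. The endpoint and tags below are real OSM conventions, but the helper names, the sample area "Berlin", and the sample response are illustrative assumptions, not anyone's actual pipeline:

```python
def build_park_query(area_name: str, limit: int = 50) -> str:
    """Build an Overpass QL query for leisure=park elements in a named area.

    POST this string to https://overpass-api.de/api/interpreter to get JSON back.
    """
    return (
        "[out:json][timeout:25];"
        f'area["name"="{area_name}"]->.a;'
        "("
        'node["leisure"="park"](area.a);'
        'way["leisure"="park"](area.a);'
        ");"
        f"out center {limit};"
    )

def extract_park_names(overpass_json: dict) -> list[str]:
    """Pull the name tag from each returned element, skipping unnamed parks."""
    return [
        el["tags"]["name"]
        for el in overpass_json.get("elements", [])
        if "name" in el.get("tags", {})
    ]

# A response shaped like what Overpass returns (trimmed for illustration):
sample = {
    "elements": [
        {"type": "way", "id": 1, "tags": {"leisure": "park", "name": "Mauerpark"}},
        {"type": "way", "id": 2, "tags": {"leisure": "park"}},  # unnamed, skipped
    ]
}
print(build_park_query("Berlin")[:20])
print(extract_park_names(sample))  # ['Mauerpark']
```

Even this route only gets you names, locations, and whatever tags mappers bothered to add; it doesn't write descriptive prose, which is presumably where the AI temptation comes in.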

0

Delioth t1_jcvlr9f wrote

When you're giving people information that may be used for, e.g., travel planning... it's better to say nothing than to say something factually incorrect.

1

grafknives t1_jcx8sva wrote

How can you make sure that open street maps are correct?

1

Delioth t1_jcydx79 wrote

You can't, aside from finding corroborating sources. But we've been using things like MapQuest and Google Maps for some time now, long enough to trust that they're usually correct. They sometimes miss a closed road or give a weird direction, but they're by and large correct. ChatGPT and the like have precisely the opposite track record. There are a few times it's correct, and those are certainly cool, but it's also as confidently incorrect as that crass uncle everyone seems to have.

But I need you to recognize that "this map says there's a road to turn left on here, but there isn't one" (so you go another hundred feet and turn left) is different from "experience nature's beauty with the falls and oaks at parkname" when the park has neither of those things. We have a track record showing that maps are usually pretty correct. There's no such record for AI chatbots, and the evidence we do have shows their flaws.

ETA: I mean, I asked ChatGPT to tell me about a state park in my hometown, and it wasn't even close to accurate: it claimed the park is on a lake (it's not), has a nature center (it doesn't), has showers (it doesn't), and is good for bird watchers because of the waterfowl and herons (which would probably be accurate if the park were on a lake). GPT got a few things right, often in the same sentences where it got stuff wrong: why the park was named what it is, its size, and the fact that it has hiking and camping.

1