ChatGPT: Let the AI disruption begin…
There have been a few times in my career when, on exposure to a new technology, I became giddy with excitement at the knowledge that I was interacting with a truly disruptive force, one whose impact would eventually reach every industry, every business, and life as we know it. One was circa 1994, when I was first exposed to the Internet through a Mosaic browser. Another was when I held an iPhone for the first time. Another still was the first time I drove a Tesla. The most recent was when I first interacted with ChatGPT a couple of months ago.
When I see a technology that is game-changing, I want to tell everyone about it. I want to show it to people. The desire to teach (something I inherited from my parents) comes out, and I want to learn it and teach it simultaneously. In an effort to spread the use of ChatGPT within our company, encourage experimentation, and get people's wheels turning on how this new tool can improve and simplify our jobs, I decided to sponsor a contest.
One of the obvious uses for ChatGPT is article writing. The content is original because the AI works (to oversimplify) in a way loosely similar to the human brain: first it "reads" lots of content and "learns," then, when asked a question, it "summarizes" what it knows. It does not directly quote paragraphs of other people's work per se; it writes in a manner you might expect a human writer to write (but a lot faster).
Using ChatGPT, it quickly becomes apparent that the quality of the output depends heavily on the quality of the questions you input. Asking, "Describe Type 2 diabetes" will give you a brief paragraph about Type 2 diabetes. But asking, "Give me a 425-word article on the epidemiology of Type 2 diabetes and the latest standard-of-care treatment options, and use citations and references" will give you something much more specific, in article form.
These articles do need to be proofed. There are many stories about ChatGPT producing fictional information and citing falsehoods as fact. The default tone reads with a sophisticated confidence that makes it easy to believe what you are reading is true, though that is not always the case.
Another noteworthy observation is that there are biases in the output. Not only biases based on the content the AI initially read and learned from, but biases introduced by the very nature of the questions you ask it. For example, when asked the two questions below (note that I changed only one word, bolded for emphasis in both questions), the output is very different, with ChatGPT effectively taking both sides of the argument, much like two high school seniors assigned to opposing sides of a debate.
In each case, ChatGPT writes an article (see links) that does exactly as it was asked: picking apart the strategy as foolish, or endorsing it as brilliant. So the question itself can introduce tremendous bias, much like a poorly written market research question can introduce bias into a survey.
Our contest had two parts. Each had a generous cash prize for first and second place.
The first part was about quantity. Following some preconstructed guidelines and formats, the objective was to create as many articles as possible for publication on some of our owned-and-operated websites.
The second part was about quality. The objective was to create the article that receives the most views within 30 days after publication.
So how did our contest do?
Well, our winner produced about 133 articles. The second-place winner produced 84. Both reported that their total time invested was "about two hours." Thinking of quality questions can become harder over time, as one of our rules was to not repeat yourself (all original content). This shows us that not everyone will utilize these tools with the same efficiency, nor will everyone have the same ability to ask quality questions. The jobs of the future may go to the ChatGPT experts.
As for which article had the most views… we're still working on that one. We still have a lot of content to review and publish.