The world has been buzzing about AI and ChatGPT in recent months, and I have written quite a bit about the dangers involved from an SEO standpoint. I work with a number of clients in the world of health, where the stakes can be even higher. I attended Pubcon in Austin a few weeks ago, and of course the industries surrounding digital marketing are learning new things every day about using AI as a tool and about optimizing for search engines that now incorporate it into their algorithms. Both Bing and Google sent their public faces as keynote speakers to talk about AI and their respective algorithms. One story popped up that really drives home why AI can be a useful tool, but only with human oversight. Given how many healthcare sites I look at regularly, it grabbed my attention.
Pulling Together Popular Opinion is Bound to Go Wrong
I won’t get into who said it, but in an open discussion at the conference, someone asked about interesting things people had found while using ChatGPT. One example involved a very dangerous situation: a query about safe oxygen levels for scuba diving, where the AI’s response, if followed, was potentially deadly. This gets us back to where and how AI tools pull their information. In many cases, getting it roughly right is fine, but in the worlds of health and finance, and I’m sure many others, it can be extremely dangerous to present something as fact that is just flat-out wrong. The latest news of layoffs at Microsoft suggests that maybe they had some other serious mistakes going on. We can only hope that is the case, and not that Microsoft simply isn’t worried about ethics anymore.
The point is that AI in search results typically grabs top-ranked content and mashes it together to try to provide the user with an answer. It can definitely be helpful and effective, as long as the information it is pulling from is accurate. That is, of course, not always the case. Search engines rank content based on many different factors; Google will even tell you that UX, or user experience, is the top ranking factor. I’m sure they would say accuracy is part of UX, but just as with our news media, there is an aspect of what gets the clicks and attention ranking higher, simply because more people visit those sites and more sites link to them. We all know there are plenty of popular sites out there that spew falsehoods quite regularly. So be careful when using AI to write content around health topics. The search engines typically frown upon ranking AI-generated content, even as they provide it themselves. That may seem unfair, but it helps them make sure they are feeding good-quality content into their AI. If the source feeding their AI is more jumbled AI output, you can see how that may go downhill fast.
Comedy is a Great Example of AI at Work
I have always been a big fan of standup comedy, and of comedy in general; it is always nice to be able to laugh, something I was taught from a young age. Vulture recently gathered some established standup comedians and had them look at ChatGPT’s attempts to copy their styles and jokes. I think it is actually a great example of how it all works. Much as AI tends to take the creativity and uniqueness out of content and make it generic, it did the same to the styles of these successful comics, essentially reducing their comedy to dad jokes that don’t really represent their voice and humor. It still makes jokes and may even get a laugh, but in the end, it isn’t what you were going for. You can start with AI to save time on many different writing tasks, but you need to know your subject area and the rules of the environment you are working in, then shape and add to your content to make it useful and viable. Don’t get lazy and rely totally on AI. It not only isn’t funny; it can be deadly in the world of healthcare.