The short answer is yes! But we have a longer answer prepared as well.
Many of us do not think about how AI is already embedded in our day-to-day lives. It is part of our mobile phones (think Siri), a website's chatbot, and the engine that suggests shows to watch or songs to listen to. It is pretty much everywhere.
In 2023, generative AI came into the spotlight and reframed our thoughts on what AI could be. ChatGPT made us see that AI wasn't just the futuristic, robotic entity often portrayed in the movies but something resembling a search engine with the potential for personality (keep your eyes on X's Grok). Plug in the right prompt, with the right amount of detail, and you can have full essays, project plans, research papers, code, and much more in a matter of seconds. Soon organizations will have roles for people who have mastered how to prompt AI effectively for the best, most accurate results, much like they have SEO specialists now.
But while GPT-4, the model behind ChatGPT, reportedly scored in the top 10 percent of test takers on a simulated Uniform Bar Examination, there is still plenty of room for error. Just ask the lawyer who used ChatGPT to prepare his filing in a routine personal injury case between an individual and an airline. The AI system returned bogus cases, decisions, and citations, which ended up being presented to the court in the filing! The lawyer claimed his error came from thinking ChatGPT was a search engine, when in fact it is a generative language processing tool.
And what is a generative language processing tool? It is the interface to a deep-learning model that draws on the vast amounts of data it was trained on and returns text, images, and other types of content based on what was asked of it. Put like this, it does sound like a search engine. If you have a few extra minutes and are truly interested in a more in-depth definition, I would check out this article from IBM. It does an excellent job explaining it all.
So back to the AI Usage Policy. It is commonly said that humans are the weakest link in a company's cyber defense strategy. Therefore, it makes total sense to have an AI Usage Policy in place.
What should your AI Usage Policy say?
Your policy should detail what is and is not allowed with respect to usage. While it could be helpful to have AI draft a cadence of social media posts for your company if you don't have a large marketing department, you may not want your staff using AI to create your proprietary documents, research papers, white papers, and the like. Or perhaps you allow the foundational use of generative AI to get a team started, but you have proofing processes in place to ensure that statistics, figures, quotations, and so on are all verified before sharing with your intended audience. The policy needs a list of checks and balances so that your staff understand the boundaries of AI use in content creation and research.
It would also make sense to list acceptable and unacceptable tools. Given that the AI industry has thousands of startups in the works as this is being typed, that could be a very difficult list to maintain. Your policy should be treated as a living document that is regularly updated.
Depending on what industry you are in, and from an ethical standpoint, you may want to include a clause stating that if your staff use generative AI, it must be disclosed to your audience. Around the globe, governments, policymakers, educational institutions, and companies are hammering out the rules for AI usage. Expect that what you put in place today could need to change tomorrow as ethics laws and regulations become more aligned.