Everyone knows that artificial intelligence makes our lives easier at work. Whether it’s for ideation, rudimentary tasks, or research, 68% of workers report using this technology in their organisation.
Interestingly, many of those employees don’t inform their managers about their AI usage, which raises questions and concerns. Instead of debating the pros and cons of AI, companies need to start creating policies that support its use.
Technology is advancing and neglecting this will only place your team at a disadvantage. So why not learn how it works, implement policies, and train your employees on best practices?
It Starts From The Top
Nearly 75% of CEOs believe AI is a top investment priority.
This shows us that the majority recognise its importance, and that companies that don’t keep up will be left behind. CEOs and managers need to train their employees in AI literacy and upskill them in its equitable use.
Let’s take a marketing team and a legal team, for example.
Your marketing team might be upskilled in AI tools for ad creative, design, and copywriting. A legal team, on the other hand, will be restricted in how it can use AI, since inputting confidential information would be a bad idea.
Sounds obvious right? But mistakes are bound to happen.
Many employees aren’t provided with training and are learning how to use AI in their own time. With no oversight from managers and no internal courses, employees may not be as effective with AI, ultimately impacting their productivity.
The bottom line is it all starts from the top.
Learning Effective AI Use
Most companies fail at teaching AI.
The ones that do teach are more concerned with protocols and policies to ensure employees don’t put company data at risk, so the training revolves around AI safety. But if you’re looking to make effective use of AI at work, we recommend focusing on prompt engineering.
Without the fancy lingo, prompt engineering is understanding what text to enter to get high-quality, specific results from a tool like ChatGPT or Gemini. It’s all about “how” a question is asked.
Let’s give an example of a well-constructed prompt compared to a poor one for writing a blog article.
Poor prompt:
“Write an 800-word article on the impact of artificial intelligence on the modern workforce.”
This is a poor prompt for many reasons and will produce mediocre work if you plan to copy-paste its output. A good prompt involves multiple steps: drawing on personal research, providing more context, revising the prompt to fill gaps, and giving creative direction.
VS
Good prompt:
“Generate an article outline for the topic ‘x’ in a business-magazine style for entrepreneurs.”
Now we’re presented with an outline of subheadings that you can select based on your article’s vision. Based on your personal research, you can now begin the writing process.
“Use a conversational tone to write the introduction to this article and make it start with a lead-up question. Cover points ‘x’ and ‘y’ while including the keywords ‘a’ and ‘b’, along with other contextual long-tail keywords.”
We’re not going to go through the entire process of writing an article, but you get the picture. Keep in mind that not every AI tool can pull updated or live information, and you will need to adjust your prompts accordingly.
The tool isn’t a mind reader; getting positive results depends on the following factors:
- Clarity of intent
- Context awareness
- Specificity
- Open-endedness
- Personalisation
- Error handling
- Multimodal inputs
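The factors above can be sketched as a simple, reusable prompt template. This is a hypothetical helper for illustration, not any vendor’s real API; the function and parameter names are our own:

```python
def build_prompt(task, topic, audience=None, tone=None, keywords=None):
    """Assemble a prompt that covers intent, context, and specificity.

    This is an illustrative sketch; adapt the pieces to your own workflow.
    """
    parts = [f"{task} on the topic '{topic}'."]  # clarity of intent
    if audience:
        parts.append(f"The audience is {audience}.")  # context awareness
    if tone:
        parts.append(f"Use a {tone} tone.")  # personalisation / creative direction
    if keywords:
        parts.append("Include the keywords: " + ", ".join(keywords) + ".")  # specificity
    return " ".join(parts)

prompt = build_prompt(
    task="Generate an article outline",
    topic="AI in the workplace",
    audience="entrepreneurs reading a business magazine",
    tone="conversational",
    keywords=["AI literacy", "prompt engineering"],
)
print(prompt)
```

The point isn’t the code itself but the habit it encodes: every prompt you send should state the task, the context, and the constraints explicitly rather than leaving the tool to guess.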
Understand Strengths and Weaknesses
Every AI tool has its pros and cons.
In an office environment, it’s likely that you’re either using ChatGPT or Gemini (formerly Bard), but there’s a time and place for using one over the other. Here’s an example:
Recent information/news:
Perhaps you’re writing about legal changes or asking an AI tool a question about Google Analytics 4. In this scenario, Gemini would be the preferred tool because it can draw on up-to-date information from Google.
ChatGPT, by contrast, has a training-data cutoff of January 2022. Keep in mind that every AI tool has typical strengths:
- Speed and efficiency
- Scalability and adaptability
- Insightful analytics
- 24/7 availability
- Task automation

…and typical weaknesses:
- Factual accuracy
- Susceptibility to bias
- Contextual understanding
- Creativity and originality
- Consistency