That public relations and communications are now global practices is nothing new to PR practitioners. Global PR means more now than it used to; gone are the days when this was a term only used to describe large, multinational companies.
Companies large and small, from high tech to farming and everything in between, are now global. Whether that means they have customers around the globe or just the possibility of a viral video being seen by millions from New York to Hong Kong, PR is global for almost every company in some manner.
PR measurement continues to be considered a high priority, and yet its adoption and penetration numbers seem to be flagging. Many reasons are cited as barriers to PR measurement, but I suspect many practitioners are simply waiting, hoping a technological solution is just around the corner.
As anyone who has ever used automated sentiment analysis on a project knows, there are limitations on how much we can expect computers to do right now. While automated sentiment has improved over the years, it’s still not tremendously accurate.
In part, this is due to the way that many automated systems parse an article: the software scans the piece looking for negative words in proximity to the keywords established for an account. This isn't a particularly nuanced process to begin with, and when you start throwing in other factors like sarcasm, regional language differences, slang terms, and emojis, automation will struggle to provide accurate results.
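To make the limitation concrete, here is a minimal sketch of that keyword-proximity approach. The word list and window size are hypothetical; real tools use much larger lexicons and weighting, but the underlying logic is similar.

```python
# Hypothetical negative-word lexicon; real systems use thousands of entries.
NEGATIVE_WORDS = {"fail", "lawsuit", "recall", "terrible"}

def proximity_sentiment(text, keyword, window=5):
    """Flag a mention as negative if a negative word appears within
    `window` tokens of the tracked keyword."""
    tokens = [t.strip(".,!?\"'").lower() for t in text.split()]
    hits = [i for i, t in enumerate(tokens) if t == keyword.lower()]
    for i in hits:
        nearby = tokens[max(0, i - window): i + window + 1]
        if NEGATIVE_WORDS & set(nearby):
            return "negative"
    return "neutral" if hits else "no mention"

print(proximity_sentiment("Acme faces a product recall this week", "acme"))
# -> negative
```

Note that a sarcastic rave like "Acme's recall response was just terrible, said no one ever" scores exactly the same as a genuine complaint, which is the nuance problem the paragraph above describes.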
Language and Learning
AI struggles most with context, comprehension, and more complex feats like abstract thought. A 2016 piece in MIT Technology Review provides great background and insight into AI challenges when it comes to language. The piece details advances and setbacks, and in reading it one quickly begins to understand how big the task of asking computers to read and understand language really is.
Consider Google’s efforts in this area. In 2016, the company released code that is designed to parse and understand syntax in English-language writings. The software, perhaps predictably named Parsey McParseface, works with SyntaxNet to break down sentences in English and understand them. McParseface identifies the English words and SyntaxNet splits out the syntax of a sentence. The two components then arrive at a meaning. This can be fairly straightforward for simple sentences, e.g. “Jack ran home.” But as anyone who has ever had to diagram a sentence knows, longer sentences can have levels of ambiguity and meaning that aren’t as easy to break down.
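To show what "breaking down" a sentence means in practice, here is a hand-written dependency parse of "Jack ran home." in the word/relation/head style that SyntaxNet-like parsers emit. This is an illustration only, not SyntaxNet output; the relation labels follow the common Universal Dependencies conventions.

```python
# Hand-written dependency parse of "Jack ran home." as (word, relation, head)
# tuples; illustrative only, not actual parser output.
parse = [
    ("Jack", "nsubj", "ran"),   # "Jack" is the subject of "ran"
    ("ran",  "ROOT",  None),    # the main verb anchors the sentence
    ("home", "advmod", "ran"),  # "home" modifies where the running went
]

# Finding the root is trivial here; the hard part for software is producing
# these tuples correctly for long, ambiguous sentences.
root = [word for word, rel, head in parse if rel == "ROOT"]
print(root)  # -> ['ran']
```

A three-word sentence yields one obvious tree; a forty-word sentence with nested clauses can yield many plausible ones, which is exactly the ambiguity the diagramming comparison points to.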
Still, Google asserts that the program has an accuracy rate of around 96-97 percent when parsing newswire stories, which is remarkable. It is, however, limited to English—and is still being worked on. Given the global nature of communications work, that leaves much of the globe still waiting for solutions.
Regional Dialects and Punctuation
Even staying within the boundaries of the English language, AI can struggle. While the Google program showed a high rate of success for understanding wire stories, people don’t speak like wire stories; human language is complex. Think of how a simple typo or punctuation change can alter the meaning of a sentence. Take this classic as an example:
The children said, “Let’s eat, grandma!” versus The children said, “Let’s eat grandma!”
The presence or absence of one comma changes the meaning from excitement to cannibalism. Missing punctuation aside, most humans would assume the first meaning, with or without the comma. Software, at least for now, does not have the benefit of the whole of the human condition to draw on when making that sort of distinction.
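The grandma example is even worse for software than it first appears: many text-analysis pipelines strip punctuation during preprocessing, so the two sentences become literally identical before any analysis happens. A small sketch, using typical bag-of-words normalization:

```python
import string

def normalize(text):
    """Typical bag-of-words preprocessing: lowercase, strip punctuation,
    split into tokens. Many sentiment pipelines do roughly this."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return cleaned.split()

dinner = normalize('The children said, "Let\'s eat, grandma!"')
cannibalism = normalize('The children said, "Let\'s eat grandma!"')

# The comma that separates dinner from cannibalism is gone before
# the software ever "reads" the sentence.
print(dinner == cannibalism)  # -> True
```

Any system downstream of this step cannot recover the distinction, no matter how clever its scoring is.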
Another area where the gulf between AI and actual understanding is evident is spoken-command AI. In our home, both Siri and Alexa have routinely disappointed. YouTube is home to a fair number of videos showing frustrated Scots attempting to communicate with Alexa; it's almost a subgenre.
Countries are tackling some of these issues, most notably the ones that are already leading in AI development. An article in The Japan Times discusses how AI researchers are working to train AI to recognize regional dialects in an effort to improve medical outcomes. Again, in reading the piece, it's quickly apparent how much effort is required—and how far we still have to go.
AI for Communications and PR
Public relations and communications work is already benefiting from the use of AI—but it isn’t going to replace traditional measurement and metrics anytime soon.
That isn't to say that the use of AI in PR isn't helpful; it is. The primary area where AI is currently deployed in communications work is the administrative side. Chatbots can handle routine customer care queries, and AI in email can prompt suggested responses or quickly add scheduling items to your calendar. These are smaller steps, to be sure, but they do free up time to do measurement.
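The routine-query triage that chatbots handle can be surprisingly simple under the hood. A toy sketch, with hypothetical intents and canned replies (production bots add intent classifiers and escalation logic on top of this skeleton):

```python
# Hypothetical intent keywords mapped to canned replies; illustration only.
CANNED = {
    "hours": "We're open 9am-5pm, Monday through Friday.",
    "return": "You can start a return any time from your order page.",
}

def chatbot_reply(message):
    """Answer a routine query if a known intent keyword appears;
    otherwise return None so a human takes over."""
    text = message.lower()
    for intent, reply in CANNED.items():
        if intent in text:
            return reply
    return None  # no match: escalate to a person

print(chatbot_reply("What are your hours?"))
```

The point is the division of labor: the bot absorbs the predictable questions, and anything it cannot match goes to a human, which is where the freed-up time comes from.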
Eventually, it's likely that we will see AI that can parse the language of an article, correctly assess sentiment and topics, and layer in additional understanding and context by looking back at previous coverage. It might be a while before this is available at a price point that will make communicators happy, and longer still before it's universally available to parse content in all languages around the world.
Until then, we have to rely on humans to provide this level of insight, particularly on a global scale.