
L. Frank Baum was way ahead of his time when he led Dorothy to expose the Great and Powerful Wizard of Oz as a charlatan. The Munchkins believed that the wonderful Wizard of Oz who had suddenly landed in their lives was a magician – he could summon thunder and lightning; he could create arcs of electricity; he could even make a humanoid face appear on a screen and compose speech (do you see where I’m going with this yet?).

But when little Toto pulled back the curtain, we all learned that the Wizard’s power wasn’t really magic at all: it was a combination of very good technology and an audience that wanted to believe in magic. And when people really want to believe in something, they have the ability to suspend their doubts; to not try and figure out how it works; to simply accept it as beyond their comprehension. The curtain is always there and sometimes people decide not to look behind it.

Besides being a beloved fantasy story, Baum’s work was also a commentary on the politics of the late 19th century. He would probably be surprised to learn that it can also be used to explain the seeming magic of many AI applications in the early 21st century – and the resulting slack-jawed suspension of rational thought on the part of potential users and buyers. Fast forward 100 years (the book was published in 1900; the MGM movie production debuted in 1939) and the curtain has simply been replaced with an LCD screen. Just as in the original story, today’s awestruck audience is enthralled with the magic of AI, and the wizards behind most of the platforms aren’t in any hurry to dispel the aura.

Think of some recent demos or videos of AI applications that you’ve experienced or seen – everything from computer vision in autonomous vehicles to robots that can solve a Rubik’s Cube® in seconds to an app that can scan a pile of random Lego® pieces and suggest what can be built with them. We’re captivated by the magic – and we usually fail to realize that behind every one of those applications is a veritable army of people who painstakingly labeled and annotated still images, video footage and other types of unstructured data. It’s the accuracy of that labeling that provides the critically important training data an AI model needs so it can then ingest images it hasn’t seen before – and make sense of them.

Data labeling is absolutely necessary for the success of any artificial intelligence, machine learning, or business intelligence implementation. It’s also tedious, labor intensive, and time consuming, so it’s often something that gets minimized during platform installations. In many cases the work is assigned to a data scientist or a data engineer (after all, it does contain the word ‘data’…) who usually doesn’t have the time to devote to this kind of effort. And because these individuals typically lack hands-on data labeling experience, the project takes longer and the output may not be as accurate as that of someone who does this work for a living. It should come as no surprise that ‘shortcuts’ are fairly common: reducing the number of images that are labeled; drawing rough approximations (instead of tight bounding boxes) to identify subjects; using off-the-shelf image sets instead of samples from actual use cases; and, worst of all, using images curated and labeled by another AI model.
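For readers who have never seen labeled data up close, here is a minimal, hypothetical sketch of the kind of bounding-box annotation a human labeler produces – loosely modeled on the widely used COCO layout, with file names, categories, IDs, and pixel coordinates that are purely illustrative:

```python
# A hypothetical, minimal bounding-box annotation record, loosely modeled on the
# COCO layout. File names, IDs, categories, and coordinates are illustrative only.
labeled_image = {
    "image": {"id": 1, "file_name": "warehouse_cam_0001.jpg", "width": 1920, "height": 1080},
    "annotations": [
        {
            "id": 101,
            "image_id": 1,
            "category": "forklift",
            # [x, y, width, height] in pixels - a tight box drawn around the subject,
            # not a rough approximation of where it "probably" is.
            "bbox": [412, 237, 318, 265],
        },
        {
            "id": 102,
            "image_id": 1,
            "category": "pallet",
            "bbox": [901, 640, 210, 148],
        },
    ],
}

# Thousands of carefully checked records like this become the training set that
# lets a model make sense of images it has never seen before.
for ann in labeled_image["annotations"]:
    print(ann["category"], ann["bbox"])
```

Multiply that by tens of thousands of images – each needing the same care – and it becomes clear why rushed or approximate labeling quietly undermines the model that depends on it.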

Data labeling is such an important function because it’s the key to utilizing unstructured data in an advanced technology application. Yet a large number of implementations fail to include unstructured data in their deployments. Even those providers who claim to make all of a company’s data available usually exclude unstructured data (which leads to an interesting question of what ‘all’ really means). And that leads to a disturbing result: the majority of companies that invest in an AI/ML/BI platform are initially disappointed with its performance. The disconnect is very rarely due to the platform; it’s generally not the fault of the users – the investment doesn’t provide the expected returns because vital parts of the company’s total data inventory were unknowingly left out. And no model – no matter how clever it appears to be – can learn from or offer insights on data that isn’t available to it.

While it might be easy to blame the Wicked Witch for the omission of images, videos, audio files, .pdf documents and other types of non-tabular data, the real reason is much less sinister. Although unstructured data makes up 80% of the new data produced worldwide each day, historically it hasn’t been considered a vital source of information. Clients are generally concerned with SQL tables, SAS datasets and other ‘row-and-column’ sources, and platform providers are reluctant to disrupt their sales cycle by introducing project variables that weren’t requested by the prospect. So the topic doesn’t usually even get discussed – until the installation fails to deliver on its promises.

The next time you’re captivated by some AI “magic”, remember that the best applications rely on people working behind the curtain to turn images and videos and other non-standard types of data into the structured datasets that power those amazing models. When planning your own advanced technology implementation, it shouldn’t take a house falling on you to remind you to include your company’s unstructured data in the build.  And be sure to find an experienced service provider with a highly skilled Human-in-the-Loop team to accurately and efficiently label all that data – this isn’t the time to call in the flying monkeys.
