The GIGO Principle in Machine Learning
And its implications for PMs, designers, salespeople and data scientists
Garbage-In-Garbage-Out (GIGO) is the idea that the output of an algorithm, or any computer function for that matter, is only as good as the quality of the input it receives.
The principle underlying GIGO is essential when it comes to the real-world deployment of algorithms. And with the increasing usage of ML in everything from public-facing APIs to the underlying services that power public-facing applications, awareness and assimilation of this principle are as important now as they have ever been.
Let me demonstrate with an example.
One of the traditional ML challenges in ecommerce is categorizing consumer products into a taxonomy, given product metadata.
Consider a taxonomy of just two categories, say “Electronics” and “Footwear”. The challenge is to map product metadata, typically a product title, to the right category: “Apple iPhone 7 32GB Silver” belongs in Electronics, while “Nike Air Zoom Pegasus Running Shoes” belongs in Footwear.
How would you solve this problem with machine learning? A standard approach: featurize the product title (say, with TF-IDF over word n-grams), then train a supervised classifier on labeled examples.
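A minimal sketch of that approach, using scikit-learn. The titles and labels below are illustrative stand-ins, not real training data:

```python
# Featurize product titles with TF-IDF and fit a linear classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

titles = [
    "Apple iPhone 7 32GB Silver Smartphone",
    "Samsung 55-inch 4K Ultra HD Smart LED TV",
    "Sony Wireless Noise Cancelling Headphones",
    "Nike Air Zoom Pegasus Men's Running Shoes",
    "Adidas Ultraboost Women's Running Shoes",
    "Timberland Men's 6-inch Waterproof Boots",
]
labels = ["Electronics", "Electronics", "Electronics",
          "Footwear", "Footwear", "Footwear"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word unigrams and bigrams
    LogisticRegression(),
)
model.fit(titles, labels)

# Well-formed inputs that resemble the training data classify cleanly.
print(model.predict(["Apple iPhone 8 64GB Smartphone"]))
print(model.predict(["Adidas Men's Running Shoes"]))
```

On well-constructed titles like these, the pipeline does fine, which is exactly why validation numbers can look deceptively strong.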
Simple enough … right? Not quite. Because this is when the GIGO principle rears its head. In production use, your model may start seeing examples like “Apple Watch Nike+”, “Nike” and “Air Jordan”.
Note that in these examples, the input is sparser and more ambiguous than the examples originally provided. Yet, the answers are reasonably discernible to humans upon closer examination.
You may find, though, that your model returns nonsensical outputs when posed with such queries. This is likely to happen if your training dataset is made up entirely of well-constructed product entries and has never encountered poorly constructed inputs. The model comes to expect a certain structure in its inputs, and in the absence of that structure, it may find itself in neighborhoods of feature space that it has never visited before, leading to nonsensical results.
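To see why a query like “Apple Watch Nike+” is so treacherous, it helps to look at raw token overlap. In this hypothetical sketch (the training titles are made up), the query overlaps training examples from both categories equally, so a model that has only ever seen rich, well-formed titles gets no clear signal:

```python
import re

# Two well-formed training titles, one per category.
training = {
    "Apple iPhone 7 32GB Silver Smartphone": "Electronics",
    "Nike Air Zoom Pegasus Men's Running Shoes": "Footwear",
}

def tokens(text):
    # Normalize: lowercase and strip punctuation such as the "+" in "Nike+".
    return set(re.findall(r"[a-z0-9]+", text.lower()))

query = "Apple Watch Nike+"
overlaps = {label: len(tokens(query) & tokens(title))
            for title, label in training.items()}
print(overlaps)  # "apple" pulls one way, "nike" pulls the other
```

A human resolves the ambiguity instantly (“Watch” settles it); a model trained only on rich titles has no comparable anchor.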
This is an example of the GIGO principle at play. A model is only as good as the examples that it has been trained on; provide it with examples that veer from its standard template (i.e., garbage in the model’s eyes) and it will give you garbage outputs. After all, models don’t learn to solve problems per se — they learn to minimize loss on training datasets and thereby deliver generalizable value incidentally.
Which leads us to the question … what can we do to overcome GIGO barriers?
Look to build models and systems that are resilient. Maximizing precision/recall/F1 on a given dataset is not the end of it. Sniff out all hypothetical and practical variations in input data that your model might encounter in the real world. Seek out insights from product managers, customer success execs, salespeople (more on this below) and anyone with ears close to the ground who can enhance your perspective. Then, ensure that your validation datasets sufficiently reflect these realities.
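One practical way to make a validation set reflect those realities is to derive degraded variants of each clean example, keeping the label. This is a hedged sketch; the degradation rules (brand-only queries, truncated titles, lowercasing) are illustrative assumptions, and the real set should come from the input variations your team actually observes:

```python
# Derive the sparse, malformed variants seen in production from each
# clean validation example, preserving the ground-truth label.
def degraded_variants(title):
    words = title.split()
    variants = {
        words[0],             # brand-only query, e.g. "Nike"
        " ".join(words[:2]),  # truncated title
        title.lower(),        # casing users actually type
    }
    variants.discard(title)   # keep only genuinely degraded forms
    return sorted(variants)

clean_validation = [
    ("Nike Air Zoom Pegasus Men's Running Shoes", "Footwear"),
    ("Apple iPhone 7 32GB Silver Smartphone", "Electronics"),
]

augmented = [(variant, label)
             for title, label in clean_validation
             for variant in degraded_variants(title)]
for text, label in augmented:
    print(text, "->", label)
```

Scoring your model on `augmented` alongside the clean set surfaces GIGO failures before your customers do.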
If required, build multiple models to tackle all the different scenarios identified, and track metrics for each of the models separately. At Semantics3, for some APIs, two different inputs to the same service may trigger two completely different models behind the scenes, with the choice of the model in turn being dictated by higher level heuristics or models.
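The routing idea can be sketched as a cheap heuristic that inspects the input and dispatches to a model trained for that regime. The model names and the token-count threshold here are hypothetical placeholders, not Semantics3 internals:

```python
# Route each input to the model trained on inputs of its kind,
# so that each model's metrics can be tracked separately.
def route(title):
    if len(title.split()) <= 2:
        return "sparse_input_model"  # trained on brand/short queries
    return "full_title_model"        # trained on well-formed titles

print(route("Nike"))                        # sparse_input_model
print(route("Apple iPhone 7 32GB Silver"))  # full_title_model
```

In practice the dispatcher may itself be a model rather than a rule, but the principle is the same: no single model is forced to cover every input regime.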
Product Managers & UX Designers
Along with building better models (always a work in progress), it is important to build better client experiences. This is particularly important when the end user is a paying customer. In this regard, PMs and UX designers have significant roles to play.
The first line of defence is to align customer expectations with capabilities through lucid API design, responses and documentation. Traditionally, APIs have been deterministic — when you interact with resources via the Twitter, Twilio, Stripe or Foursquare APIs, there’s a clear expectation of what actions should be carried out behind the scenes. But as a new wave of non-deterministic algorithm-centric APIs emerges, there arises an additional element of subjectivity that makes the challenge of building APIs trickier.
The second is to act as a bridge between the customer and the data scientist to ensure that the latter is directed towards failure scenarios that haven’t been accounted for. Real-world precision rates that customers see could be very different from the validation numbers reported by data scientists, and someone has to act as an intermediary to keep the two in sync.
Salespeople & Customer Success Execs
When the client in question is an end-user of the SaaS/Enterprise mold, the onus lies on salespeople and customer success execs to keep expectations aligned. If you work in these capacities, be keenly aware of subjectivity in the services that your company provides. Then dig deeper into your customers’ needs to see how their requirements will tango with these subjective components.
At Semantics3, we build internal demos of our public-facing AI APIs, and give our business teams first crack at improvements and features that we build. Salespeople tend to be a demanding group, since they’re the ones who have to ultimately empathize with and showcase products to customers, so there’s nothing like a good teardown from the sales team to unearth GIGO pitfalls.
In sum, if your company plays a role in delivering products that rely on solving subjective problems through algorithms, keep an active watch for GIGO limitations. Look for mismatches in expectations and performance between what your clients/customers experience and what your algorithms are advertised to do. If you believe that customers are or could be affected by such issues, ensure that your entire team stays vigilant to probe and tackle these problems and keep your customers satisfied.
At Semantics3, we work on ecommerce problems such as product matching, unsupervised extraction, categorization and feature extraction from unstructured text. If you’d like to join us, drop us a note.
To get access to our AI APIs for Categorization, Matching or Feature Enhancement, get in touch.
Published at: July 05, 2017