‘Tis the Season: Feasting on Data

In many parts of the world, this is a season of feasts: feasts of thanksgiving, feasts of celebration, and feasts of remembrance. There are different strategies when it comes to preparing for feasts. Some people fast before the feast, hoping to somehow average out their consumption. Others double down on exercise before and after the event. Still others adopt a live-for-the-moment approach, immersing themselves in the event and sharing the abundance. During one of my holiday feasts, I had a flash of insight about the relevance of feasts to…big data, of course!  We can learn a lot from feasts about living in a world of too much data, too many choices of technology, and too many “cooks” trying to organize the data to draw conclusions. Here are some morsels for your consumption.

Greedy Algorithms: Taking Two Pieces of Chicken

A friend of mine shared with me that when he was growing up and the family had guests over for dinner, his family would always subtly pass the food to the guests first. If a guest took two servings of something, one family member would feign disinterest in the dish, ensuring that there would be enough food for everyone else. He also confessed that he occasionally ate only mashed potatoes or stuffing and had to endure one or more of his relatives telling him that he was too skinny and that he should eat more. In algorithms that ingest and synthesize data, there is a direct analogy to this situation: greedy algorithms.

A greedy algorithm is an approach that, at each step, takes whatever option offers the most immediate gain, often with the goal of maximizing throughput in a constrained process. Consider a time-bound process that can observe and ingest one of several clusters of information of different size or computational complexity. An example might be an algorithm that chooses among several pending customer service requests in a queue. If the preferred approach is to process as many requests as possible per hour, we might take the shortest, simplest inquiries first, gobbling them up to maximize throughput. This is a greedy algorithm. The downside to greedy algorithms is obvious: more complex requests can languish in the queue, possibly never being processed because simpler requests keep arriving and jumping ahead of them.
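To make the queue example concrete, here is a minimal sketch of a greedy, shortest-first scheduler. The request names, effort estimates, and time budget are purely illustrative assumptions, not real workload data.

```python
import heapq

def greedy_schedule(requests, time_budget):
    """Greedily pick the smallest-effort requests until the budget is spent.

    requests: list of (name, effort) tuples; all values here are illustrative.
    Returns (processed, still_waiting).
    """
    heap = [(effort, name) for name, effort in requests]
    heapq.heapify(heap)  # the smallest-effort request always sits at the top

    processed, elapsed = [], 0
    while heap and elapsed + heap[0][0] <= time_budget:
        effort, name = heapq.heappop(heap)
        elapsed += effort
        processed.append(name)
    return processed, [name for _, name in heap]

done, starved = greedy_schedule(
    [("password reset", 2), ("billing dispute", 45), ("address change", 3),
     ("fraud review", 60), ("status inquiry", 1)],
    time_budget=50,
)
print("processed:", done)        # the quick wins
print("left waiting:", starved)  # the complex requests that languish
```

Run with these made-up numbers, the scheduler happily clears the three quick requests and never touches the billing dispute or fraud review, which is exactly the starvation problem described above.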

There are multiple alternatives to greedy algorithms (more complex weightings, FIFO/LIFO, etc.). Each approach has advantages and disadvantages. What is most important to remember is that it is often naive to consider only one metric (e.g. maximize throughput). Like my friend at the dinner table, you could wind up with a very unbalanced experience. As machine learning and cognitive approaches become increasingly sophisticated, we must be careful that, in getting what we were reaching for (e.g. two pieces of chicken), we do not cause unintended secondary effects (e.g. one person eating only mashed potatoes).

Greedy algorithms can be very effective for simple, goal-based approaches. However, they can, and usually do, have an unintended negative impact on subsets of the problem.

Apples and Oranges: Comparing Things that are Different

Another problem we experience at feasts is how to choose among dissimilar options. If someone asks me what kind of dressing I prefer on my salad, I have no issues picking my favorite (usually a simple vinaigrette, non-balsamic, with a little oregano if possible). It’s a lot more difficult, however, when the choice is among dissimilar options, such as selecting ice cream or apple pie for dessert. (Perhaps this problem gave rise to pie à la mode!) So, how do you choose?

Choices among dissimilar options often occur in data sourcing for big data decision-making. Consider that you only have the resources (time, money, people) to pursue one of several sources of social data for an experiment on sentiment analysis for a new product launch. One source may have more articulations but fewer independent speakers; another may have many articulations but more complex terms of use. This kind of asymmetric choice is difficult to vet on equal terms.

In such situations, practitioners often gravitate to the choice that “feels” right, without actually having the ability to explain the criteria upon which the decision was made. What happens when the character and quality of the information changes? There is no way to know when to switch to something new because there is no definitive way to articulate why the choice was made in the first place.

A best practice is to adopt a multidimensional, factor-based approach to vetting the value of various choices. Factor analysis allows for the identification of several factors that contribute to an overall outcome. By using various statistical or heuristic methods (e.g. watching behavior of experts over time), it is possible to construct relative weights of the various factors involved in making a choice. At this point, it becomes possible to make choices among dissimilar options by observing the existence and magnitude of various weighted qualities, thus creating a common way to compare dissimilar choices.
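As a sketch of what such a factor-based comparison might look like in practice, the snippet below scores two hypothetical social data sources against weighted factors. The factor names, weights, and 0-10 ratings are assumptions for illustration, not recommendations.

```python
# Hypothetical factors and weights for vetting dissimilar data sources.
FACTOR_WEIGHTS = {
    "volume": 0.30,             # number of articulations available
    "speaker_diversity": 0.25,  # independent voices rather than repeats
    "terms_of_use": 0.25,       # licensing friction (higher = easier to use)
    "cost": 0.20,               # fit within budget (higher = cheaper)
}

def score_source(ratings, weights=FACTOR_WEIGHTS):
    """Collapse 0-10 factor ratings into one weighted, comparable score."""
    return sum(weights[factor] * ratings[factor] for factor in weights)

source_a = {"volume": 6, "speaker_diversity": 4, "terms_of_use": 8, "cost": 7}
source_b = {"volume": 9, "speaker_diversity": 7, "terms_of_use": 2, "cost": 5}

for name, ratings in [("Source A", source_a), ("Source B", source_b)]:
    print(name, round(score_source(ratings), 2))
```

Because the weights are explicit, the decision can be revisited and defended when the character of the data changes, rather than relying on the choice that merely "feels" right.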

Beware of choices made among dissimilar options. Establish clear decision criteria and measure changes over time to avoid indefensible analytical frames of reference.

The Gravy: The Data We Use After We've Made Our Choices

One of the big arguments I can remember growing up was the difference between sauce and gravy. I am not a culinary expert (just a consumer), but my understanding is that sauce is something you make independent of the dish, while gravy comes in part from byproducts of cooking which might otherwise be discarded. In data science, there are many types of gravy – data which comes from the processing of other data.

One example is a signal. A signal is essentially a relatively meaningless observation (e.g. the telephone rang) which, when observed in quantity or over time, takes on meaning (e.g. the telephone always rings many times around dinner time and very rarely after midnight). Signal analysis has lately been behind many amazing feats of data science, especially with the advent of the Internet of Things (IoT). IoT devices produce a myriad of rich signals, which can be harvested to understand systemic behavior, as well as to hint at rules or other input to more complex learning systems.
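As a minimal sketch of how individually meaningless observations take on meaning in aggregate, the example below counts hypothetical "the telephone rang" timestamps by hour of day; the timestamps are invented for illustration.

```python
from collections import Counter
from datetime import datetime

# Individually meaningless observations: the telephone rang at these times.
ring_times = [
    "2024-11-28 18:05", "2024-11-28 18:40", "2024-11-28 19:10",
    "2024-11-29 09:15", "2024-11-29 18:22", "2024-11-30 18:48",
]

# Aggregated over time, a signal emerges: calls cluster around dinner time.
rings_by_hour = Counter(
    datetime.strptime(t, "%Y-%m-%d %H:%M").hour for t in ring_times
)

for hour, count in sorted(rings_by_hour.items()):
    print(f"{hour:02d}:00  {'*' * count}")
```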

Another type of data “gravy” is error conditions. Errors occur when a process or its data falls outside expected norms. In the past, error conditions were processed, adjudicated, and treated with appropriate dispensation. The systems that process the errors often function like data “doctors,” helping to fix information that would otherwise harm the overall process. A good example would be auto-correct in a word processor.

Imagine that we collected the data from our error conditions and learned from it. Using the auto-correct example, my mobile telephone has learned to spell my surname (something that took me until the 3rd grade!) by picking up the correct spelling of a word that was not previously in the device's default dictionary.
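Here is a minimal sketch of that idea, assuming a toy auto-correct that keeps track of words the user repeatedly refuses to have corrected and, after a threshold, adds them to its dictionary. The class name, threshold, and example surname are hypothetical.

```python
from collections import Counter

class LearningAutocorrect:
    """Toy auto-correct that learns new words from its own error conditions."""

    def __init__(self, dictionary, learn_after=3):
        self.dictionary = set(dictionary)
        self.rejections = Counter()   # times the user kept a flagged word
        self.learn_after = learn_after

    def is_known(self, word):
        return word in self.dictionary

    def user_kept_word(self, word):
        """The user overrode a correction: an error condition worth keeping."""
        self.rejections[word] += 1
        if self.rejections[word] >= self.learn_after:
            self.dictionary.add(word)  # the byproduct becomes new knowledge

ac = LearningAutocorrect({"the", "holiday", "feast"})
for _ in range(3):
    ac.user_kept_word("Wojcikowski")   # hypothetical surname, not in the default dictionary

print(ac.is_known("Wojcikowski"))      # True: learned from repeated "errors"
```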

It seems that we are only in the early stages of learning how to make gravy from data. What new capabilities will emerge as more systems keep track of data that they would have otherwise thrown away?

Data synthesis is only in the early stages of learning from data byproducts. As data increasingly begets data, we will start to see algorithms learn things that their creators never imagined.

So now we have been to the data feast. We have much to learn about table manners in a world of too much data. We also have much to gain by learning new approaches to thinking about the abundance of riches that comes from the bounty of data available to the organization. As you sit down to your next holiday feast, give a silent word of thanks that you live in an age where the insight you gain is so abundantly enriched by a world where data abounds.
