Actionable big data: How to bridge the gap between data scientists and engineers

The buzz around big data has created a widespread misconception: that its mere existence can provide a company with actionable insights and positive business outcomes.

The reality is a bit more complicated. To get value from big data, you need a capable team of data scientists to sift through it. For the most part, corporations understand this, as evidenced by the 15x – 20x growth in data scientist jobs from 2016 to 2019. However, even with a capable team of data scientists on hand, you still need to clear the major hurdle of putting their ideas into production. To realize true business value, you have to make sure your engineers and data scientists work in concert with one another.

The gap

At their core, data scientists are innovators who extract new ideas from the data your company ingests on a daily basis, while engineers in turn build on those ideas and create sustainable lenses through which to view that data.

Data scientists are tasked with deciphering, manipulating, and merchandising data for positive business outcomes. To accomplish this, they perform a variety of tasks ranging from data mining to statistical analysis. They collect, organize, and interpret data in pursuit of identifying significant trends and relevant information.

While engineers certainly work in concert with data scientists, there are some distinct differences between the two roles. One of the fundamental differences is that engineers place a decidedly higher value on the “production readiness” of systems. From the resilience and security of the models data scientists generate to their format and scalability, engineers want systems that are fast and reliably functional.

In other words: Data scientists and engineering teams have different day-to-day concerns.

This raises the question: How can you position both roles for success and ultimately extract the most meaningful insights from your data?

The answer lies in dedicating time and resources to perfecting relations between your data and engineering teams. Just as it’s important to reduce the clutter or “noise” around data sets, it’s also important to smooth out any friction between these two teams, both of which play vital roles in your business success. Here are three critical steps to making this a reality.

1. Cross-training

It’s not enough to simply put a few scientists and a few engineers in a room and ask them to solve the world’s problems. You first need to get them to understand each other’s terminology and start speaking the same language.

One way to do this is to cross-train the teams. By pairing scientists and engineers into pods of two, you can encourage shared learning and break down barriers. For data scientists, this means learning coding patterns, writing code in a more organized way, and, perhaps most importantly, understanding the tech stack and infrastructure trade-offs involved with introducing a model into production.
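To make “writing code in a more organized way” concrete, here is a minimal sketch of the kind of refactor that often comes out of these pairings. The names and the shipping-delay example are hypothetical, not taken from any particular team’s codebase: the same logic moves from a throwaway notebook cell into a typed, documented function an engineer can review, test, and ship.

```python
import pandas as pd

# Notebook-style prototype: hard-coded file, no reusable interface.
# df = pd.read_csv("orders_march_final_v2.csv")
# df["delay"] = (df["delivered_at"] - df["promised_at"]).dt.days


def add_shipping_delay(orders: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of `orders` with a `shipping_delay_days` column.

    Expects `delivered_at` and `promised_at` as datetime columns.
    """
    out = orders.copy()
    out["shipping_delay_days"] = (out["delivered_at"] - out["promised_at"]).dt.days
    return out
```

The logic is unchanged; what the engineer gains is a named interface, documented assumptions, and something that can be unit-tested before it touches production.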

With both sides in sync on each other’s goals and workflows, you can foster a more efficient software development process. And in the fast-paced tech world, efficiency gains realized through continued education and clear communication between data science and engineering are a huge win for any company.

2. Placing a higher value on clean code

With your data and engineering teams speaking the same language, you can focus on more tactical aspects, like clean, easy-to-implement code.

When a data scientist is in the early stages of a project, the iterative and experimental style of their workflow can seem chaotic to an engineer working on production systems. A mashup of internal and external inputs is manipulated on the fly as they begin to train their models. Operating within a fluid environment like this is commonplace for data scientists but can be problematic for engineers. If code from the experimentation or prototyping phase is handed straight to engineers, you’ll soon hit a roadblock: the model falls short on stability, scalability, or overall speed.

To clear this roadblock, my team has invested time and resources in standardization. The end result is that our data scientists and engineers are aligned on a set of parameters: coding standards, data access patterns (for example, use S3 for file IO and avoid local files), and security standards. This framework gives our data scientists the means to write code that’s performant within our ecosystem while allowing them to focus on challenges specific to their domain of expertise.
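As an illustration of what such a data access pattern can look like in practice, here is a minimal sketch. It assumes pandas with s3fs installed (so that `s3://` paths work with `pd.read_csv`), and the bucket and helper names are invented for the example: a single shared helper reads curated data from S3, so no model quietly depends on a file that exists only on someone’s laptop.

```python
import pandas as pd

# Hypothetical shared bucket for curated datasets; real names will differ.
DATA_BUCKET = "s3://example-curated-data"


def read_dataset(key: str) -> pd.DataFrame:
    """Load a curated dataset from the shared S3 bucket (never from local disk)."""
    return pd.read_csv(f"{DATA_BUCKET}/{key}")


# In a training script, instead of pd.read_csv("my_local_copy.csv"):
# shipments = read_dataset("shipments/2019-06.csv")
```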

3. Creating a feature store

One of the best ways to maximize value from clean code is to “productize” it internally, creating an environment where both engineers and data scientists can lean on their strengths. We call this the “feature store”: essentially a centralized location for storing documented and curated features (independent variables).

The purpose of this data management layer is to feed curated data into our machine learning algorithms. Aside from standardization and ease of use, the main benefit for our team is that the feature store enables consistency across models. It has significantly increased the stability of our algorithms and improved our data team’s overall efficiency. Data scientists and engineers know that when they take a feature off the shelf, it’s been stress-tested for reliability and won’t break when it goes into production.
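The article doesn’t describe a specific implementation, but the idea can be sketched in a few lines of Python. The toy registry below (all names hypothetical) stores each feature alongside its documentation and computation logic, so every model that takes a feature “off the shelf” gets exactly the same definition.

```python
from dataclasses import dataclass
from typing import Callable, Dict

import pandas as pd


@dataclass
class Feature:
    name: str
    description: str
    compute: Callable[[pd.DataFrame], pd.Series]


class FeatureStore:
    def __init__(self) -> None:
        self._features: Dict[str, Feature] = {}

    def register(self, feature: Feature) -> None:
        """Add a documented, reviewed feature to the shared catalog."""
        self._features[feature.name] = feature

    def get(self, name: str, raw: pd.DataFrame) -> pd.Series:
        """Compute a registered feature from raw data, identically for every model."""
        return self._features[name].compute(raw)


# Example: a pricing model and an ETA model can reuse the same definition.
store = FeatureStore()
store.register(Feature(
    name="shipping_delay_days",
    description="Days between promised and actual delivery.",
    compute=lambda df: (df["delivered_at"] - df["promised_at"]).dt.days,
))
```

A production feature store would add versioning, persistence, and access controls on top of this, but the core contract is the same: features are documented, curated, and retrieved through one shared interface.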

The proliferation of big data and machine learning at the organizational level has created new opportunities and new challenges along the way. Phase one was the realization that big data in and of itself wasn’t going to create efficiencies; you need innovative thinkers to make sense of it. Phase two is about helping those people, the data scientists who excel at finding value, put their ideas into practice in a way that meets the rigors of an engineering team operating at scale, with thousands of customers relying on the product.

Jonathan Salama is CTO and Co-Founder of Transfix, an online freight marketplace.

