No-Code, Large Scale Deployments, and Multimodal Approaches Will Define NLP in 2022

NLP at a glance

When most people think of natural language processing (NLP), voice assistants like Alexa and Siri come to mind. While human-machine interaction has come a long way, it’s just scratching the surface of what the technology can do. In fact, the most impactful use of NLP doesn’t involve speech at all.

But let’s start by defining NLP. The technology is a subset of artificial intelligence (AI) and machine learning (ML) that focuses on enabling computers to process and understand human language. While speech is part of it, the most impactful progress in NLP lies in its ability to analyze written text. 

As such, NLP happens largely behind the scenes, and people interact with it more than they know. From hospitals to financial services to law offices, NLP powers much of the reading, writing, and data analysis used to deliver these services. Even as a young technology, it has made its mark in the enterprise over the last several years.

What’s next for NLP

All signs point to this growth continuing. Even in the thick of the global pandemic, NLP spending was on the rise while general IT spending took a hit, and investments in NLP have continued on that trajectory this year. According to a recent industry survey, 93% of tech leaders indicated their NLP budgets grew by at least 10% (and for many, 30% or more) compared to 2020.

2021 was a promising year for NLP. Use cases ranged from identifying fake news and toxic content online to speeding up clinical trials through better candidate selection. Even beyond healthcare and media, NLP is proving its worth across industries.

But several factors can take this growth to the next level in 2022: no-code software, advances in large-scale deployments, and multimodal approaches to NLP will each contribute significantly to its growth in the coming year. Here’s why:

Low-code is great

Low-code software requires little to no coding expertise to build an application, and not surprisingly, low-code solutions had a moment last year. Simplifying NLP is a surefire way to sustain growth in the field, because it enables practitioners of all skill levels to use the technology. Running many of the most complex deep learning models can now be reduced to a single line of Python code.
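As a minimal sketch of what that looks like in practice, here is an example using the open-source Hugging Face transformers library, one popular low-code interface (an illustrative choice, not necessarily the tooling the article has in mind):

```python
# A pretrained sentiment classifier in essentially one line of Python,
# using the Hugging Face transformers library.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default pretrained model
print(classifier("Low-code tools make NLP far more approachable."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```

Swapping the task name (for example, "summarization" or "question-answering") yields a different pretrained pipeline behind the same one-line interface.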

For NLP novices, this lowers the barrier to entry. Having a formal education and hands-on experience with foundational NLP, deep learning, and transfer learning libraries used to be a prerequisite. Now, anyone can get started with just a basic understanding of the technology. 

This isn’t just valuable for those new to the field. For data scientists, the simplification enables a level of automation that empowers them to focus on more important work. This will become increasingly important as the AI talent shortage persists. Low-code solutions have benefits across the board, and fortunately, we’re seeing more of them each day.

No-code AI becomes a reality

In 2022, we’ll build on the low-code trend with no-code software, which will make AI and ML more approachable for everyone. By putting more power in the hands of domain experts, no-code eliminates the need for a dedicated data scientist, democratizing NLP even further. We’re already seeing this start to play out.

Consider building a website, for example. What once required coding competency can now largely be done by a graphic designer. This is how no-code will reach users who don’t carry a programmer’s title. It will also help refine NLP for specific business use cases. After all, if you’re building healthcare AI models to detect COVID-19 in lung X-rays, you want a doctor weighing in more than a data scientist.

The shift in emphasis from data scientists to domain experts will be gradual, but we’ll see many more easily applied no-code options facilitating it in the coming year. It’s similar to the difference between paying programmers to write custom software and giving analysts Excel: no-code is built for a different set of non-technical users, and finally there is a class of tools that makes it possible for them to get acquainted with NLP.

Fine-tuning models to deploy them at scale

In the aforementioned survey, tech leaders cited accuracy as the most important factor when considering an NLP solution. That said, difficulty tuning models was one of the biggest challenges they cited. Yet continually tuning models is critical for accurate results and, equally important, keeps them from degrading over time.

Healthcare is an industry where continuous monitoring and tuning are especially important. Technology often treats fixing a person like fixing a car: if something is broken, you simply scan an academic paper or medical journal and apply the solution. But humans are not that simple. Many factors are at play: medical history, social determinants of health, and how one doctor interprets results compared to another, to name just a few.

By enabling domain experts, in this case medical professionals, to adjust models, we let them tune those models correctly for specific situations. At larger scale, models very often need to be tuned separately for each deployment, because they perform differently in different production environments, even when both are clinical settings.
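As a sketch of what such a per-site check might look like, here is a hypothetical Python example (the function, the AUC metric, and the 0.80 threshold are illustrative assumptions, not from the survey or the study below): the same pretrained model is validated against each site’s own labeled data before being trusted there.

```python
# Hypothetical sketch: validate one pretrained classifier on each
# deployment site's own labeled data before rolling it out there.
from sklearn.metrics import roc_auc_score

def validate_for_site(model, site_name, X_local, y_local, min_auc=0.80):
    """Score the model on a site's local patients and flag it for
    re-tuning if performance falls below an agreed threshold."""
    auc = roc_auc_score(y_local, model.predict_proba(X_local)[:, 1])
    print(f"{site_name}: AUC = {auc:.3f}")
    return auc >= min_auc  # False means: re-tune before deploying here

# The same model can pass at one hospital and fail at another:
# validate_for_site(sepsis_model, "Hospital A", X_a, y_a)  -> True
# validate_for_site(sepsis_model, "Hospital B", X_b, y_b)  -> False
```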

In recent news, a retrospective study in JAMA Internal Medicine found that a model developed to predict sepsis in patients failed to identify two-thirds of those affected. While some providers reported success with the tool, researchers from the University of Michigan Medical School found the results to be substantially less accurate when applied to their own patients.

Considering how models will perform in different settings and on different populations can be the difference between life and death in healthcare, but it’s important in other industries too. The good news is that we’re getting better at this: users can now deploy models at scale faster and more accurately than ever before.

Multimodal solutions take NLP to the next level

Human language is not black and white. We interpret meaning from written language, speech, images, and more. As a result, we need ML techniques that can “read,” “see,” and “listen” all at the same time. Multimodal learning techniques, which combine different modalities of data using tools like NLP and computer vision, are key for these use cases.

While NLP models are great at processing text, many real-world applications involve documents with more complex formats. For example, healthcare systems often include visual lab results, sequencing reports, clinical trial forms, and other scanned documents. When NLP is used alone for document understanding, the information carried by layout and style is lost.

However, with new advances in multimodal learning, models can learn from both the text in documents, via NLP, and the visual layout, through technologies like computer vision. Combining multiple technologies into a single solution to produce better results is the core of multimodal learning, and we’re starting to see more of it move from research to production.
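To make the idea concrete, here is a minimal PyTorch sketch of one common pattern, late fusion (an assumed design for illustration, not any specific product’s architecture): a text encoder and an image encoder each produce an embedding, and a classifier learns from their concatenation.

```python
# Hypothetical late-fusion sketch: combine a text embedding and an image
# embedding, then classify the document from the joint representation.
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    def __init__(self, text_dim=768, image_dim=512, num_classes=5):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(text_dim + image_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, text_emb, image_emb):
        # Embeddings would come from an NLP encoder (e.g. a transformer)
        # and a computer-vision encoder (e.g. a CNN), respectively.
        return self.head(torch.cat([text_emb, image_emb], dim=-1))

model = LateFusionClassifier()
logits = model(torch.randn(2, 768), torch.randn(2, 512))  # batch of 2 documents
print(logits.shape)  # torch.Size([2, 5])
```

Late fusion is only one design choice; other approaches fuse the modalities earlier, for example by feeding layout information into the text model itself.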

2021 has been a standout year for NLP, and we can expect that to continue into the new year. With easier-to-use tools, more accurate results, larger deployments, and the ability to pair NLP with other powerful AI technologies, it will be interesting to see where 2022 takes us.

David Talby

CTO, John Snow Labs

I help companies build real-world AI systems, turning recent scientific advances into products and services. My specialty is applying machine learning, deep learning, and natural language processing in healthcare.
