Artificial Intelligence

The field of AI has been in a state of rapid transformation in the past few years, and 2019 was no exception. This was the year when Natural Language Processing (NLP) and Deep Learning Neural Networks converged to bring about a number of exciting new AI advances. At Dataminr, our AI team has been at the forefront of this trend, innovating to bring our clients the earliest and most actionable alerts.

With the advent of Deep Learning Neural Networks and advances in hardware (GPUs), processing large amounts of textual data is now possible with highly accurate models that perform billions of lightning-fast calculations. This is new territory for the field historically known as “big data,” and throughout 2019, many new types of NLP-based AI applications flourished because of these new possibilities. In some of these cases, Deep Learning produced better performance than was possible with previously available methods. In others, it led to advances that opened the door to tackling entirely new problems that just a few years ago seemed completely out of reach.

At Dataminr, we have been pushing forward real-time NLP and deploying Deep Learning Neural Networks on textual data for years. Across our AI platform today, the majority of our AI models are deep learning neural networks, ranging from convolutional neural networks to recurrent neural networks, including Long Short-Term Memory networks (LSTMs) and, more recently, Transformer Neural Networks. Dataminr’s AI platform has long processed and “read” text using Deep Learning Neural Networks as part of the process of creating alerts. One of the most interesting advancements of Dataminr’s AI platform over the last two years has been the extension of our NLP work from “reading” text (Natural Language Understanding) to “writing” text (Natural Language Generation, NLG). In other words, we have extended our AI platform into the field of automatic writing by AI algorithms.

Dataminr’s AI platform processes billions of public data inputs per day, from thousands of public data sources, in different languages and in different formats (text, image, video, sensor, and combinations of all of the above). Most public data sources integrated into Dataminr’s AI platform contain text that can be hard to understand quickly if read on its own, without any context. Social media and other forms of user-generated text often contain slang, shorthand, and technical terms that require context to fully understand. These qualities can make “raw text” from user-generated posts very difficult to understand, especially for our clients who are receiving alerts and responding to incidents in real time. To address this, Dataminr augments alerts with captions: succinct summaries that describe the most critical facts about an event in a quickly readable and digestible format.

Automatic caption generation is a machine learning task related to automatic summarization in the field of Natural Language Processing (NLP), where AI algorithms aim to extract key information from long articles to produce short summaries. However, generating alert captions is a different and harder problem. Creating an effective real-time alert caption often requires additional context that does not exist in the text of the original post. For example, the specific words, phrases, and entities needed to summarize what a social media post describes are most often not found within the post itself. To make this challenge even more complex, the phraseology and vernacular of the digital domain are in constant flux: the meaning of words and phrases, and the usage of syntactic structures, are continually shifting across the interconnected digital information landscape.

To integrate captions into Dataminr, we formulated a multi-dimensional solution, combining the real-time power of Deep Learning Neural Networks with the unique benefits of an online human-AI feedback loop. In early 2018, we started delivering our first captions in our alerts. As a first step, we tasked a 24/7 team of professional Domain Experts with writing short, succinct captions for alerts in our real-time alert pipeline. Dataminr’s highly talented Domain Experts bring experience from our primary customer industries; their backgrounds range from journalism to the public sector to finance, giving them an invaluable perspective. Domain Experts have long been an essential component of Dataminr’s success, ensuring the alerts we send to our clients are of the best possible quality, creating historical labeled datasets on which to train our AI, and making Dataminr’s AI platform more sophisticated by integrating key knowledge domains into our models.

Alert captions proved beneficial to our clients right from the start, achieving the goal of making alerts far more easily understandable in real time and effectively reducing the time between alert delivery and a client’s first response. At first, our caption-writing team could only make use of the then-available alert building blocks, such as alert topic and event type, as well as highlighted entities (identified via Named Entity Recognition, or NER). These available elements were useful and kept the caption-writing time to roughly a minute on average. But right from the start, we knew it would be our advanced applications of AI, in concert with an ongoing human-AI feedback loop, that would enable captions to become truly real-time and scalable.
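
To make this concrete, here is a minimal, hypothetical sketch of how building blocks such as topic, event type, and highlighted entities might be serialized alongside the raw post into a single input for a caption-generation model. The field names and special tokens below are purely illustrative and are not Dataminr’s actual schema.

# Hypothetical sketch: serializing alert "building blocks" into one model input.
# Field names and tags are illustrative only, not Dataminr's actual schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AlertContext:
    post_text: str                                       # raw user-generated post
    topic: str                                           # e.g. "Fire"
    event_type: str                                      # e.g. "Structure Fire"
    entities: List[str] = field(default_factory=list)    # highlighted NER entities

def build_model_input(ctx: AlertContext) -> str:
    """Combine the post text with contextual building blocks into a single
    sequence that a sequence-to-sequence caption model can consume."""
    entity_str = "; ".join(ctx.entities) if ctx.entities else "none"
    return (
        f"<topic> {ctx.topic} "
        f"<event> {ctx.event_type} "
        f"<entities> {entity_str} "
        f"<post> {ctx.post_text}"
    )

example = AlertContext(
    post_text="smoke everywhere on 5th ave, whole block shut down",
    topic="Fire",
    event_type="Structure Fire",
    entities=["5th Avenue", "FDNY"],
)
print(build_model_input(example))

Marking each building block with its own tag lets a generation model condition on context, such as the event type or a resolved entity name, that may never appear verbatim in the post itself.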

For most advanced AI applications, getting an accurate labeled dataset of an adequate size is the essential first step. By early 2019, the historical dataset of captions created by Domain Experts had grown large enough that we could effectively train and deploy Deep Learning Neural Networks and complete the architecture of our initial multi-dimensional solution. To tackle this challenge, we deployed a novel neural network architecture called a Transformer Neural Network, first introduced in the seminal 2017 publication “Attention Is All You Need.”
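
As a rough illustration of what training on such a dataset involves, and not Dataminr’s production code, the sketch below trains a small encoder-decoder Transformer (using PyTorch’s built-in nn.Transformer) on pairs of serialized alert inputs and Domain Expert captions, with standard teacher forcing. All sizes and the placeholder batch are invented for the example.

# Minimal training sketch (not Dataminr's production code): a small
# encoder-decoder Transformer trained on (serialized alert input, expert
# caption) pairs with standard teacher forcing. Sizes and data are invented.
import torch
import torch.nn as nn

VOCAB_SIZE, D_MODEL, PAD_ID = 32000, 256, 0

class CaptionModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, D_MODEL, padding_idx=PAD_ID)
        self.transformer = nn.Transformer(
            d_model=D_MODEL, nhead=8,
            num_encoder_layers=4, num_decoder_layers=4,
        )
        self.out = nn.Linear(D_MODEL, VOCAB_SIZE)

    def forward(self, src_ids, tgt_ids):
        # nn.Transformer expects (seq_len, batch, d_model)
        src = self.embed(src_ids).transpose(0, 1)
        tgt = self.embed(tgt_ids).transpose(0, 1)
        causal = self.transformer.generate_square_subsequent_mask(tgt.size(0))
        hidden = self.transformer(src, tgt, tgt_mask=causal)
        return self.out(hidden).transpose(0, 1)          # (batch, tgt_len, vocab)

model = CaptionModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss(ignore_index=PAD_ID)

# src_ids: tokenized serialized alert inputs; tgt_ids: tokenized expert captions.
src_ids = torch.randint(1, VOCAB_SIZE, (16, 120))        # placeholder batch
tgt_ids = torch.randint(1, VOCAB_SIZE, (16, 30))

logits = model(src_ids, tgt_ids[:, :-1])                  # predict next caption token
loss = loss_fn(logits.reshape(-1, VOCAB_SIZE), tgt_ids[:, 1:].reshape(-1))
loss.backward()
optimizer.step()

In practice, training would iterate over many batches drawn from the historical (alert input, Domain Expert caption) pairs, with some held out for evaluation; the sketch shows only a single illustrative update step.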

Transformer Neural Networks provide a capacity similar to that of Recurrent Neural Networks for representing long-range contextual information in inputs, but without requiring recurrent connections in the network’s structure. The internal network layers of a Transformer are highly connected, with variable weights across individual elements of an input sequence. This allows structurally simpler feed-forward layers with a constant path length to incorporate information from the entire input sequence (in our case, one or more social media messages) into the prediction for each element of an output sequence (our alert caption), without the need to direct intermediate outputs back to earlier network layers.
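
To make the contrast concrete, here is a minimal sketch of the scaled dot-product self-attention at the heart of every Transformer layer, reduced to a single head for illustration. Every output position draws on every input position through one matrix of attention weights, so the path between any two positions has constant length and no information is passed step by step through recurrent connections.

# Minimal single-head scaled dot-product self-attention, for illustration only.
import math
import torch

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_q, w_k, w_v: (d_model, d_k) projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    # Each row of `weights` spans the whole input sequence at once:
    # any output position can attend directly to any input position.
    scores = (q @ k.transpose(0, 1)) / math.sqrt(k.size(-1))
    weights = torch.softmax(scores, dim=-1)       # (seq_len, seq_len)
    return weights @ v                            # (seq_len, d_k)

seq_len, d_model, d_k = 12, 64, 64
x = torch.randn(seq_len, d_model)
w_q, w_k, w_v = (torch.randn(d_model, d_k) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)     # torch.Size([12, 64])

In a full Transformer, many such heads run in parallel within each layer, followed by feed-forward sublayers; stacking a handful of these layers is what lets the model fold an entire post into the prediction of each caption token.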

Applying these Transformer Deep Learning Neural Networks to NLG for caption creation has produced remarkable results. Just two years into the Dataminr AI platform’s work in Natural Language Generation, an exponentially increasing number of highly accurate captions are written purely by AI. These alert captions are generated in milliseconds and delivered directly to customers without Domain Expert involvement. For the captions that Domain Experts do review online, a steadily increasing percentage require no edits or only minor edits to be fully accurate. Combining NLG with Deep Learning Neural Networks has dramatically increased the speed, quality, and scalability of alert captions.

One of the things I find most exciting is how the ongoing role of Domain Experts continues to propel forward the capacity of our AI caption-writing algorithms. Domain Experts operating in an online human-AI feedback loop enable our Deep Learning Neural Network models to remain continuously responsive to the ever-changing real-time information landscape. Every caption edited by a Domain Expert in real time provides a uniquely valuable “recency-weighted” signal to our AI platform, one that stays in sync with the dynamic nuances of the shifting digital landscape.
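
As a rough illustration of the idea, and not Dataminr’s actual pipeline, one simple way to realize a recency-weighted signal is to turn each expert edit into a training example whose weight decays exponentially with its age, so the freshest corrections pull hardest on the next model update. The half-life below is an invented parameter.

# Hypothetical sketch of a recency-weighted feedback signal: each Domain Expert
# edit becomes a training example whose weight decays with age, so the most
# recent corrections influence the next model update the most.
import time

HALF_LIFE_DAYS = 14.0   # illustrative choice, not a Dataminr parameter

def recency_weight(edit_timestamp: float, now: float = None) -> float:
    """Exponential decay: an edit HALF_LIFE_DAYS old counts half as much as a new one."""
    now = time.time() if now is None else now
    age_days = (now - edit_timestamp) / 86400.0
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

def weighted_examples(edits, now=None):
    """Turn expert edits into (model_input, corrected_caption, weight) triples;
    the weight would scale each example's loss during periodic retraining."""
    return [
        (e["model_input"], e["corrected_caption"], recency_weight(e["ts"], now))
        for e in edits
    ]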

This is an example of a human-AI feedback loop in its most advanced design: not AI deployed to replace humans, but AI designed to enable a fixed-size team of highly skilled Domain Experts to dynamically shape an exponentially growing set of real-time AI algorithms that run alongside them without their direct involvement. This is something we have successfully accomplished not only for alert captions, but also for many other facets of our alert creation process. Domain Experts are one of Dataminr’s greatest assets, and we have successfully used Deep Learning Neural Networks to model and algorithmically extend their “domain expertise” to millions upon millions of real-time actions performed by our AI platform. While the size of the Domain Experts team has remained constant, their reach has scaled exponentially thanks to our AI.

Dataminr has long been a pioneer in real-time AI, deploying novel approaches to tackle some of the hardest time-sensitive challenges. The speed and effectiveness with which Dataminr integrated NLG to create alert captions in real time is a striking example: it simply would not have been possible without the convergence of NLP and Deep Learning Neural Networks. As the field of AI continues to evolve, as we continue to experiment with new architectures and approaches, and as our data assets further expand, we will keep pioneering innovative methods that push forward the possibilities of real-time AI.

Want to be a pioneer in applying AI to real-time challenges and an innovator in the field of real-time AI? Check out our open roles here.

Author
Jason Wilcox
December 20, 2019
