Real-time situational awareness is critical for making health and human services (HHS) agencies as secure and responsive as possible. In this Q&A, Dataminr Chief AI Officer Alex Jaimes talks about how AI can vastly improve that awareness.
How can AI improve situational awareness for HHS organizations?
AI excels at processing very large amounts of data from different sources. For example, Dataminr’s products use AI to detect events in real time from over a million public data sources. Those sources are extremely diverse (text in different languages, audio, images, videos, sensor data) and appear on many different types of platforms (the internet, social media, the deep and dark web and more).
Dataminr solutions produce insights on relevant events as they occur. Having that information translates to quicker and more effective responses, which have significant benefits for constituents.
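To make the multi-source idea above concrete, here is a minimal, purely illustrative sketch and not a description of Dataminr's actual pipeline. It assumes hypothetical names (Signal, should_alert, the source_type labels and the thresholds are all invented for illustration) and shows one simple way an alerting system might require corroboration across independent source types before surfacing an event.

```python
# Illustrative sketch only (hypothetical, not Dataminr's implementation):
# corroborate signals about the same candidate event across heterogeneous
# public sources before emitting a real-time alert.
from dataclasses import dataclass


@dataclass
class Signal:
    source_type: str   # e.g. "social", "sensor", "news"
    event_key: str     # normalized label for the candidate event
    confidence: float  # per-signal model confidence, 0.0-1.0


def should_alert(signals, min_sources=2, min_score=1.2):
    """Return alerts only for events corroborated by multiple source types."""
    by_event = {}
    for s in signals:
        by_event.setdefault(s.event_key, []).append(s)

    alerts = []
    for event_key, sigs in by_event.items():
        source_types = {s.source_type for s in sigs}
        score = sum(s.confidence for s in sigs)
        if len(source_types) >= min_sources and score >= min_score:
            alerts.append((event_key, sorted(source_types), round(score, 2)))
    return alerts


if __name__ == "__main__":
    observed = [
        Signal("social", "flooding:riverside_county", 0.7),
        Signal("sensor", "flooding:riverside_county", 0.8),
        Signal("social", "outage:midtown_clinic", 0.4),
    ]
    print(should_alert(observed))
    # [('flooding:riverside_county', ['sensor', 'social'], 1.5)]
```

In this toy version, the single uncorroborated signal is suppressed while the event seen by two independent source types is surfaced, which is the general intuition behind fusing diverse data for higher-confidence real-time alerts.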
What does an organization need for AI adoption?
Implementing AI requires simultaneous efforts on several fronts. You must establish responsible use policies, including mechanisms to protect data and provide appropriate access to it. You also have to make sure your data is accessible: AI systems depend on data, and organizations often struggle to connect the data they have to those systems.
AI adoption also depends on having adequate AI expertise as well as proper training to ensure that those using the technology understand its benefits and limitations.
How can HHS leaders help evolve their organization’s use of AI?
Embracing AI can start at many different levels, depending on where the HHS agency is in its journey. Leaders can begin with AI solutions for specific use cases in their organization. In many of those cases, it’s significantly more cost-effective to use third-party services than to build the functionality in house.
Some use cases do require internal efforts to develop solutions. These cases need bigger upfront investments in people, technology and more. For such projects, you should develop a strategy that identifies your goals and where to target investments to achieve those goals.
How can agencies make sure their technology partners prioritize responsible AI?
Because every AI use case is different, a lot depends on where and how the technology is applied, which makes it hard to prescribe specific steps agencies can take to ensure their technology partner prioritizes responsible AI.
But it’s important to understand your partner’s business model. It reveals how they generally use AI and where responsible AI practices come into play.
At Dataminr, responsible AI practices are built into what we do. Our products focus on detecting events from public data that are relevant to a wide range of customers in the public sector, the private sector and the media. It’s in our interest to ensure that the alerts we provide on those events are relevant, ethical and responsible.
This means we invest significant effort in ingesting a wide range of data sources from all over the world. For our AI platform to excel, rigorous testing and evaluation of those sources are critical.
The original version of this article first appeared in Government Technology. It has been edited for clarity and length.