In a recent episode of AI Explained: Healthcare and Life Sciences, host David Samwel spoke with Brett Dooies, Head of Product at Verifiable, a 2024 Salesforce Partner Innovation Award winner. Dooies, who began his career doing safety analysis for nuclear reactors, brings a distinctive system-safety perspective to AI-driven healthcare credentialing.
Verifiable is focused on rebuilding the infrastructure of trust for provider networks. Dooies leads the product and design teams in reimagining what credentialing can become with current technologies. He notes that the goal is to enable credentialing specialists and medical services professionals to do their jobs more efficiently and productively, while also having a much better experience.
But how does experience from nuclear safety apply to generative AI?
In the podcast, Dooies draws parallels between his background in nuclear engineering and the deployment of generative AI, particularly in a highly regulated field like healthcare credentialing.
- Safety Culture and Human in the Loop: The nuclear industry emphasizes a "safety culture," recognizing that policies and procedures alone are not enough; humans must carry the culture forward. Similarly, with generative AI, Dooies stresses the importance of "human in the loop" design. Just as a human reviews the results of nuclear analysis codes and applies critical thinking, generative AI systems in credentialing require human review to ensure the output makes sense and to account for edge cases.
- Pre-job Briefs to Evals: In nuclear engineering, "pre-job briefs" were conducted before major projects to consider what was already known and how it could impact the current analysis. In Gen AI design for credentialing, this is paralleled by "evals," which are tests run repeatedly to cover the core problem space and all known edge cases, as identified by staff experts.
- Post-job Briefs to Feedback Loops: The "post-job brief" in nuclear was a review of what went well and what could be learned. In Gen AI, this corresponds to feedback loops. When users accept or adjust the AI's results, any adjustment provides feedback to the system. This input is used to capture edge cases, allowing the system to make a better decision next time and setting the human up for confirmation rather than correction.
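The eval and feedback-loop parallels above can be sketched as a minimal harness. All names here (the toy model, the case format) are illustrative assumptions, not Verifiable's actual system:

```python
# Minimal sketch: an eval harness plus a feedback loop for a
# credentialing-style extraction task. All names are hypothetical.

def run_evals(model, eval_cases):
    """Run the model against every known case and report failures.

    eval_cases: list of (input_document, expected_output) pairs
    covering the core problem space and all known edge cases.
    """
    failures = []
    for doc, expected in eval_cases:
        if model(doc) != expected:
            failures.append((doc, expected))
    return failures

def record_feedback(eval_cases, doc, ai_output, human_output):
    """When a reviewer adjusts the AI's result, capture the case so
    future eval runs cover this edge case too."""
    if ai_output != human_output:
        eval_cases.append((doc, human_output))
    return eval_cases

# Toy "model": extracts a license number by naive string search.
def toy_model(doc):
    return doc.split("license:")[-1].strip() if "license:" in doc else None

cases = [("name: Dr. A\nlicense: MD-123", "MD-123")]
assert run_evals(toy_model, cases) == []  # core case passes

# A reviewer corrects an edge case the model missed; it joins the eval set,
# so the regression is visible on every future run.
cases = record_feedback(cases, "Lic#: MD-999", None, "MD-999")
print(len(run_evals(toy_model, cases)))  # prints 1: the new edge case fails
```

The point of the sketch is the ratchet: every human adjustment becomes a permanent eval case, so the system is tested against a growing record of known edge cases rather than a static suite.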
Addressing Hallucinations and Building Trust
Hallucinations are an acknowledged reality of current AI systems, but Dooies believes engineers must "engineer around" them. To build trust in AI output, the system must show users the steps it has taken and highlight instances of low confidence. The design process also involves identifying areas where Gen AI is not yet effective and opting for:
- Declarative automation: Using traditional automation instead of generative AI.
- Human intervention: Bringing a human in to enter data or make a critical choice.
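The routing decision described above can be sketched as a simple dispatcher. The handler names and the confidence threshold are illustrative assumptions, not Verifiable's implementation:

```python
# Hedged sketch: routing a credentialing check between declarative
# automation, generative AI, and human review, while recording the
# steps taken so the user can see how a result was produced.

CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff for trusting model output

def route_task(task):
    """Pick a handler for a task and log the steps for transparency."""
    steps = []
    if task["deterministic"]:
        # Rule-based check: prefer traditional automation over Gen AI.
        result = {"status": "automated", "handler": "rules_engine"}
        steps.append("ran declarative rule check")
    elif task["model_confidence"] >= CONFIDENCE_THRESHOLD:
        result = {"status": "ai_completed", "handler": "llm"}
        steps.append("LLM output above confidence threshold")
    else:
        # Low confidence: bring a human in and flag it in the UI.
        result = {"status": "needs_review", "handler": "human"}
        steps.append("flagged low-confidence output for human review")
    result["steps"] = steps
    return result

print(route_task({"deterministic": True, "model_confidence": 0.5}))
print(route_task({"deterministic": False, "model_confidence": 0.95}))
print(route_task({"deterministic": False, "model_confidence": 0.4}))
```

Exposing the `steps` list alongside each result mirrors the trust-building idea: the user sees what the system did and where confidence was low, rather than a bare answer.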
Dooies also spoke about the importance of separating low-value tasks from high-value tasks, a principle Verifiable has refined in the credentialing space. This approach focuses the attention of credentialing specialists on high-value areas, like investigating sanction hits or lapsed licenses, rather than "plain vanilla" straight-ahead cases.
The Speed of Workflow is the Product
Dooies introduced the concept of "experience driven automation," where the speed of the workflow is the actual product for users in document-heavy organizations. He noted that in credentialing, a perfectly clean file can take as much time as one with a sanction hit. By automating the non-differentiated steps, the workflow can be inverted. What might be 100+ manual steps is reduced to about three checkpoints. The AI agent handles the automated steps, and the human is presented with a summation of the things that require their attention.
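The inverted workflow can be sketched as follows: an agent runs the non-differentiated steps and the human sees only a summation of what needs attention. The step names and file fields are made up for illustration:

```python
# Illustrative sketch of "inverting the workflow": automated steps run
# end to end, and only flagged items reach the specialist.

def run_workflow(provider_file, steps):
    """Execute each automated step; collect anything it flags.

    Each step returns None when clean, or a string describing an issue.
    The human reviews the returned summary, not the individual steps.
    """
    attention_items = []
    for step in steps:
        issue = step(provider_file)
        if issue is not None:
            attention_items.append(issue)
    return attention_items

# Two example checks (in practice there could be 100+ such steps).
def check_license(f):
    return "lapsed license" if f.get("license_expired") else None

def check_sanctions(f):
    return "sanction hit" if f.get("sanction_hit") else None

clean_file = {"license_expired": False, "sanction_hit": False}
flagged_file = {"license_expired": True, "sanction_hit": False}

print(run_workflow(clean_file, [check_license, check_sanctions]))    # []
print(run_workflow(flagged_file, [check_license, check_sanctions]))  # ['lapsed license']
```

A clean file produces an empty summary and flows straight through, which is the inversion: specialist time is spent only where the file demands it.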
This process allows credentialing specialists to move to higher-value tasks, like provider relations and improving the provider experience. Salesforce aids in this by providing a platform for provider outreach, tracking interactions, and creating a seamless multi-channel experience.
So what’s the TL;DR?
- Embrace Human in the Loop: In regulated industries, Gen AI systems need a "safety culture" where humans are involved to review results and apply critical thinking, similar to practices in the nuclear industry.
- Build Trust with Transparency: Combat hallucinations by exposing the AI's steps and highlighting areas of low confidence to users.
- Iterate and Use Feedback Loops: Design systems that use human adjustments and expertise as feedback to improve system decisions and capture edge cases over time.
- Move from Pilot to Production Quickly: Once you have trust and guardrails in place, and the system proves its viability, quickly scale from pilot to production by simply turning it on for more users and use cases.
- Solve "Un-sexy" Problems for Big Gains: Focus on automating manual, non-differentiated, "messy" operational functions—like reading and typing data from old files—as these improvements significantly move the needle on end-to-end process time.
- Invert the Workflow: Redesign workflows so that an AI agent handles the automated steps and raises only the critical issues to the human specialist, reducing 100+ manual steps to a few checkpoints.
- Focus on Domain Expertise: While foundation models go wide, organizations should apply new tooling to specific problem spaces (like credentialing) where they have deep, unique knowledge.
- Keep Experimenting: Builders in tech should maintain a side project or hobby to get hands-on experience with new tools and understand their limitations, which can inform product roadmaps and strategic bets.
- Anticipate Dynamic UI/UX: Look ahead to software design where the user interface and experience are dynamic, morphing in real time to the user's needs, making the experience even more personalized.