AI for Health and Research Teams | Real Minds AI

Real Minds AI works with health and research organisations across Australia — from public health agencies to university research teams. Our CTO holds a PhD in Computer Science with 53 peer-reviewed publications and led Victoria’s COVID-19 data engineering response. We deliver AI training, research workflow automation, and capability building. We’ve worked with 16 organisations across 7 industries and hold a 4.9 Google rating from 23 reviews.

We’ve been inside the research machine. We know where AI fits.

Your ethics forms take longer than the ethics review. Your systematic review screening is a six-month slog. Your grant application process hasn’t changed since 2015, but the tools have. We help health and research teams adopt AI where it actually matters — not the hype, the workflow.

What we’ve seen from the inside

Our CTO spent 15 years in academic research — not advising from the outside, but publishing 53 peer-reviewed papers, running data linkage projects, and leading Victoria’s COVID-19 data engineering response. When we walk into a research team, we’re not learning your world. We’re coming home to it.

Here’s what we keep seeing:

  • Researchers spending more time on admin than research — ethics forms, grant formatting, citation management, data wrangling
  • Systematic review screening that takes months when AI can do a first pass in days (with human oversight where it matters)
  • Grant proposals that miss because nobody decoded the funder’s hidden rubric — the unspoken priorities buried in their strategic plan and past funded projects
  • Data scattered across REDCap, Covidence, EndNote, and seventeen spreadsheets that don’t talk to each other

The research workflow wasn’t designed for AI. But it’s ready for it.

What’s shifted (and what hasn’t)

When we started running AI workshops for research teams twelve months ago, the biggest concern was citation accuracy. AI hallucinated references confidently — wrong authors, wrong years, plausible-sounding DOIs that led nowhere. Researchers rightly didn’t trust it.

That’s shifted. Modern models with web access can verify citations in real time. The risk hasn’t disappeared, but it’s moved — from “AI invents references” to “AI finds references but sometimes misinterprets their relevance.” Different problem, different mitigation.

What hasn’t shifted is the fundamental divide our CTO named after watching dozens of research teams adopt AI:

“There’ll be two kinds of researchers. The researcher from before. And the researcher afterwards. And the ones afterwards are going to be doing this completely different thing.” — Dr Dennis Wollersheim, Co-Founder & CTO

The “after” researcher doesn’t just use AI to go faster. They use it to do fundamentally different work — testing alternative analytical workflows at scale, converting PICO criteria into machine-readable formats, running multi-model screening comparisons. AI isn’t their assistant. It’s their lab partner.
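"Machine-readable PICO" is less exotic than it sounds. A minimal sketch in Python, with invented example values, is just structured data that any model or screening pipeline can consume:

```python
import json

# Hypothetical illustration: a PICO framework expressed as a
# machine-readable dictionary. The field values are invented examples,
# not from a real review.
pico = {
    "population": "adults with type 2 diabetes",
    "intervention": "structured exercise programme",
    "comparison": "usual care",
    "outcome": "HbA1c reduction",
}

# Serialise to JSON so downstream tools can consume it verbatim.
pico_json = json.dumps(pico, indent=2)
print(pico_json)
```

Once criteria live in a structure like this rather than in a Word document, they can be fed identically to several models for the multi-model screening comparisons mentioned above.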

Where AI actually helps (and where it doesn’t)

We’re honest about boundaries. After running workshops with research teams across multiple universities, here’s the clean delineation:

“AI is really good at volume, consistency, and endurance. Humans are good at constructing meaning.” — Dr Dennis Wollersheim

AI excels at

Abstract screening at scale. Structuring grant outlines. Populating ethics forms from existing proposals. Converting frameworks (PICO) into machine-readable formats. Maintaining consistency across large datasets. Synthesising literature — AI can process a Cochrane-scale review in days.

Humans still lead on

Domain judgement calls in screening. Narrative voice in grant writing. Interpreting statistical nuance. Knowing which author to email for that missing dataset. The things that require lived experience in the field, not pattern matching.

“Using AI to populate ethics forms isn’t cheating — you’re inputting the research proposal and letting AI organise it for the form. It’s the same information with a different lens.” — Workshop participant, RMIT research group
53 peer-reviewed publications
$3M+ research grant funding
3 health/research orgs served
4.9 Google rating (23 reviews)

Who we’ve worked with in this sector

VicHealth

AI adoption training for health promotion staff. Structured around policy compliance and safe boundaries for AI use in a government health context.

RMIT University

Ongoing AI training and capability building for psychology research staff. An 8-workshop programme covering Claude Code, AI agent development, and practical workflow automation for research teams.

Dennis is also an active researcher on a long COVID data linkage project at RMIT — not as a consultant, but as a co-investigator. That’s the difference: we don’t just advise research teams, we’re still part of them.

Questions we hear from research teams

Can AI actually help with ethics applications?

Yes — if you feed it the research proposal, AI can reorganise the same information into the ethics form structure. It’s not generating new content, it’s restructuring what you’ve already written. You still review and sign off. The time saving is significant.

How reliable is AI for systematic review screening?

More consistent than humans at volume, less reliable on edge cases that require domain expertise. In our workshops, AI was more decisive with fewer ambiguities in abstract screening — but a senior researcher found it unreliable for statistical correspondence tasks. The right approach is AI for first-pass, human for judgement calls.
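The "AI for first-pass, human for judgement calls" split can be made concrete with a triage function. This is an illustrative sketch only: real screening uses a language model against the PICO criteria, but the routing logic (include, exclude, or flag for human review) is the part worth showing. The keyword lists are invented examples:

```python
# Illustrative triage sketch. In practice a model scores each abstract
# against the review's criteria; here simple keyword rules stand in so
# the routing logic is visible. Terms are invented examples.
INCLUDE_TERMS = {"randomised", "randomized", "controlled trial"}
EXCLUDE_TERMS = {"animal model", "in vitro", "protocol only"}

def first_pass(abstract: str) -> str:
    """Return 'include', 'exclude', or 'human review' for one abstract."""
    text = abstract.lower()
    if any(term in text for term in EXCLUDE_TERMS):
        return "exclude"
    if any(term in text for term in INCLUDE_TERMS):
        return "include"
    # Anything ambiguous goes to the researcher, not the bin.
    return "human review"

abstracts = [
    "A randomised controlled trial of exercise in adults...",
    "An in vitro study of glucose uptake...",
    "A qualitative study of patient experiences...",
]
for a in abstracts:
    print(first_pass(a))
```

The design choice matters more than the mechanism: the system never silently discards an ambiguous abstract, which is where domain expertise earns its keep.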

Can AI write our grant applications?

AI excels at structure, organisation, clarity, reviewer simulation, and consistency checks. It struggles with citations, domain-specific narrative voice, and the political nuance of grant strategy. We teach teams to build “context packs” — the funding call, scorecard, organisational info, team details, and a voice sample — so AI produces drafts worth editing, not rewriting.
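A "context pack" is just the background material bundled ahead of the drafting request. A minimal sketch, assuming the five components named above (the section names and text are placeholders, not a prescribed structure):

```python
# Hypothetical sketch of a "context pack" for grant drafting.
# Section names mirror the five components described above;
# the content strings are placeholders.
context_pack = {
    "funding_call": "Text of the funder's call and selection criteria...",
    "scorecard": "How applications are weighted and scored...",
    "organisation": "One-page summary of the applying organisation...",
    "team": "Short bios and track records of named investigators...",
    "voice_sample": "An excerpt of prior writing in the team's voice...",
}

def build_prompt(pack: dict, task: str) -> str:
    """Concatenate the pack into one prompt, with the task at the end."""
    sections = [f"## {name}\n{text}" for name, text in pack.items()]
    return "\n\n".join(sections) + f"\n\n## Task\n{task}"

prompt = build_prompt(context_pack, "Draft the project rationale section.")
```

With the full pack in front of it, the model drafts against the funder's actual criteria and the team's actual voice, which is the difference between a draft worth editing and one that needs rewriting.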

What about data privacy and research compliance?

This is non-negotiable. We work within your institution’s data governance framework. That means understanding which AI tools are approved, what data can and can’t be processed externally, and how to use AI without compromising participant privacy or research integrity. Dennis has navigated ethics submissions from the researcher side — he knows what the forms ask for and why.

Working in health or research?

We don’t pitch. We listen, then tell you honestly whether AI can help with what you’re facing. If it can’t, we’ll say so.

Book a free 30-min call
4.9★ from 23 Google Reviews | Responsible AI Certified (IAT) | Melbourne-based, nationally delivered

Responsible AI

IAT Certified

Tracy holds the Responsible AI certification from the Institute of Applied Technology and BABL AI. We don’t just talk about ethics — we’re credentialed.

Your data stays yours

We work with your compliance team, not around them. Australian Privacy Principles guide everything we do. Your data never trains third-party models without explicit consent.

Human-centred AI

If you use 0% of AI, you’re missing out. If you use 100%, you end up with slop. We find the right balance — AI that augments your people, not replaces them.

© 2026 Real Minds, Artificial Intelligence Pty Ltd · ABN 17 674 830 741 · Melbourne, Australia

Ask us anything