Real Minds AI works with health and research organisations across Australia — from public health agencies to university research teams. Our CTO holds a PhD in Computer Science with 53 peer-reviewed publications and led Victoria’s COVID-19 data engineering response. We deliver AI training, research workflow automation, and capability building. We’ve worked with 16 organisations across 7 industries, and hold a 4.9 Google rating from 23 reviews.
We’ve been inside the research machine. We know where AI fits.
Your ethics forms take longer than the ethics review. Your systematic review screening is a six-month slog. Your grant application process hasn’t changed since 2015, but the tools have. We help health and research teams adopt AI where it actually matters — not the hype, the workflow.
What we’ve seen from the inside
Our CTO spent 15 years in academic research — not advising from the outside, but publishing 53 peer-reviewed papers, running data linkage projects, and leading Victoria’s COVID-19 data engineering response. When we walk into a research team, we’re not learning your world. We’re coming home to it.
Here’s what we keep seeing:
- Researchers spending more time on admin than research — ethics forms, grant formatting, citation management, data wrangling
- Systematic review screening that takes months when AI can do first-pass in days (with human oversight where it matters)
- Grant proposals that miss because nobody decoded the funder’s hidden rubric — the unspoken priorities buried in their strategic plan and past funded projects
- Data scattered across REDCap, Covidence, EndNote, and seventeen spreadsheets that don’t talk to each other
The research workflow wasn’t designed for AI. But it’s ready for it.
What’s shifted (and what hasn’t)
When we started running AI workshops for research teams twelve months ago, the biggest concern was citation accuracy. AI hallucinated references confidently — wrong authors, wrong years, plausible-sounding DOIs that led nowhere. Researchers rightly didn’t trust it.
That’s shifted. Modern models with web access can verify citations in real time. The risk hasn’t disappeared, but it’s moved — from “AI invents references” to “AI finds references but sometimes misinterprets their relevance.” Different problem, different mitigation.
What hasn’t shifted is the fundamental divide our CTO named after watching dozens of research teams adopt AI: the “before” researcher versus the “after” researcher.
The “after” researcher doesn’t just use AI to go faster. They use it to do fundamentally different work — testing alternative analytical workflows at scale, converting PICO criteria into machine-readable formats, running multi-model screening comparisons. AI isn’t their assistant. It’s their lab partner.
Where AI actually helps (and where it doesn’t)
We’re honest about boundaries. After running workshops with research teams across multiple universities, here’s the clean delineation:
AI excels at
Abstract screening at scale. Structuring grant outlines. Populating ethics forms from existing proposals. Converting frameworks (PICO) into machine-readable formats. Maintaining consistency across large datasets. Synthesising literature — AI can process a Cochrane-scale review in days.
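As a sketch of what “converting frameworks into machine-readable formats” can look like in practice, here is one way PICO criteria might be encoded as structured data and applied as a crude first-pass filter. The field names, example terms, and keyword-matching logic are illustrative assumptions on our part, not a formal standard or our production pipeline:

```python
import json

# Illustrative sketch: PICO criteria encoded as structured data so a
# screening script (or an AI pipeline) can apply them consistently.
# Field names and example terms are hypothetical, not a standard.
pico = {
    "population": ["adults", "long COVID"],
    "intervention": ["pulmonary rehabilitation"],
    "comparison": ["usual care"],
    "outcomes": ["fatigue", "quality of life"],
    "exclusions": ["paediatric", "animal study"],
}

def first_pass_flags(abstract: str, criteria: dict) -> dict:
    """Crude keyword check: which PICO elements does an abstract mention?"""
    text = abstract.lower()
    return {
        element: any(term.lower() in text for term in terms)
        for element, terms in criteria.items()
    }

abstract = ("We randomised adults with long COVID to pulmonary "
            "rehabilitation or usual care and measured fatigue.")
print(json.dumps(first_pass_flags(abstract, pico), indent=2))
```

Even a toy filter like this shows the point: once the criteria are structured rather than prose, every abstract gets the same test, and the ambiguous ones can be routed to a human reviewer.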
Humans still lead on
Domain judgement calls in screening. Narrative voice in grant writing. Interpreting statistical nuance. Knowing which author to email for that missing dataset. The things that require lived experience in the field, not pattern matching.
Who we’ve worked with in this sector
VicHealth
AI adoption training for health promotion staff. Structured around policy compliance and safe boundaries for AI use in a government health context.
RMIT University
Ongoing AI training and capability building for psychology research staff. An 8-workshop programme covering Claude Code, AI agent development, and practical workflow automation for research teams.
Dennis, our CTO, is also an active researcher on a long COVID data linkage project at RMIT — not as a consultant, but as a co-investigator. That’s the difference: we don’t just advise research teams, we’re still part of them.
How we work with health and research teams
Research Workflow Diagnosis
Half a day embedded with your team. We map where data flows, where it breaks, and where AI can reduce the admin burden — from ethics applications to literature management to reporting.
The Walkthrough →
AI Capability Building
Structured workshops for research teams. Not generic “intro to AI” — we teach your team to screen abstracts, draft grant sections, automate REDCap workflows, and build reproducible analysis pipelines using AI.
AI Training & Workshops →
Compliance & Ethics Scoping
For teams navigating AI governance — ethics committee requirements, responsible AI frameworks, data handling policies. We scope what’s needed and build the guardrails so your team can use AI with confidence.
The Discovery →
Research Infrastructure Rebuild
When the whole pipeline needs work — data architecture, system integration, workflow automation, AI adoption. We diagnose, fix, and hand over. You keep everything.
Start with a conversation →
Questions we hear from research teams
Can AI actually help with ethics applications?
Yes — if you feed it the research proposal, AI can reorganise the same information into the ethics form structure. It’s not generating new content, it’s restructuring what you’ve already written. You still review and sign off. The time saving is significant.
How reliable is AI for systematic review screening?
More consistent than humans at volume, less reliable on edge cases that require domain expertise. In our workshops, AI was more decisive in abstract screening, with fewer ambiguous calls — but a senior researcher found it unreliable for statistical correspondence tasks. The right split is AI for the first pass, humans for the judgement calls.
Can AI write our grant applications?
AI excels at structure, organisation, clarity, reviewer simulation, and consistency checks. It struggles with citations, domain-specific narrative voice, and the political nuance of grant strategy. We teach teams to build “context packs” — the funding call, scorecard, organisational info, team details, and a voice sample — so AI produces drafts worth editing, not rewriting.
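To make the “context pack” idea concrete: a pack is just the relevant documents assembled into one prompt-ready bundle. The sketch below is our illustration only — the file names and folder structure are assumptions, not a fixed template:

```python
from pathlib import Path

# Hypothetical sketch of assembling a grant "context pack": gather the
# documents the model needs into a single labelled bundle that can be
# pasted into (or attached to) a prompt. File names are illustrative.
PACK_FILES = [
    "funding_call.md",   # the call text and assessment criteria
    "scorecard.md",      # how reviewers will score the application
    "org_profile.md",    # organisational track record
    "team_bios.md",      # investigator details
    "voice_sample.md",   # a past application in the team's voice
]

def build_context_pack(folder: str, files=PACK_FILES) -> str:
    """Concatenate each document under a labelled header."""
    sections = []
    for name in files:
        path = Path(folder) / name
        body = path.read_text() if path.exists() else "(missing)"
        sections.append(f"## {name}\n{body}")
    return "\n\n".join(sections)
```

The design point is less the code than the discipline: if the funder’s scorecard and a voice sample are in the pack, the model drafts toward the rubric in your register; if they’re missing, it guesses.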
What about data privacy and research compliance?
This is non-negotiable. We work within your institution’s data governance framework. That means understanding which AI tools are approved, what data can and can’t be processed externally, and how to use AI without compromising participant privacy or research integrity. Dennis has navigated ethics submissions from the researcher side — he knows what the forms ask for and why.
Working in health or research?
We don’t pitch. We listen, then tell you honestly whether AI can help with what you’re facing. If it can’t, we’ll say so.
Book a free 30-min call
Responsible AI
Tracy holds the Responsible AI certification from the Institute of Applied Technology and BABL AI. We don’t just talk about ethics — we’re credentialed.
We work with your compliance team, not around them. Australian Privacy Principles guide everything we do. Your data never trains third-party models without explicit consent.
If you use 0% of AI, you’re missing out. If you use 100%, you end up with slop. We find the right balance — AI that augments your people, not replaces them.
© 2026 Real Minds, Artificial Intelligence Pty Ltd · ABN 17 674 830 741 · Melbourne, Australia
